Why I’m Learning Perl 6 (evanmiller.org)



> Concurrency is hard and if you want M:N thread multiplexing (i.e. WEB SCALE CODE, where application threads aren’t pinned to pthreads) your options today are precisely Erlang, Go, .NET, and Perl 6.

Putting aside the "web scale" jokes (http://www.mongodb-is-web-scale.com/), this statement is still absurd.

Every major language, or at least the ones that matter for backend development, has support for thread multiplexing / coroutines / fibers, whatever. Perhaps not in the core language syntax or standard library. But it's easy to implement with native code, and so SOMEONE has implemented it in a library if the language has an FFI.

Java, and all of the other JVM-based languages in turn, have Quasar (http://docs.paralleluniverse.co/quasar/).

Ruby has support for primitive fibers baked into the standard language (https://ruby-doc.org/core-2.1.1/Fiber.html), and likewise community gems with more robust functionality like Quasar.

Python 3 likewise has this out of the box (https://www.python.org/dev/peps/pep-0492/).

The list goes on and on: https://en.wikipedia.org/wiki/Coroutine#Programming_language....


> Every major language, or at least the ones that matter for backend development, has support for thread multiplexing / coroutines / fibers, whatever.

M:N threading is not, from a developer perspective, the same thing as coroutines/fibers. Coroutines are lower-level: it's possible to build something like M:N threading on top of them, but that means doing a lot of work that's already done for you in a language that provides M:N threading out of the box. And if you have to do it yourself, code you get from others generally isn't going to integrate well with it.


> Coroutines are lower-level [than M:N threading]

Not sure I understand what you're trying to say here, this sounds exactly backwards to my reading. A green threading library can be written in assembly language with no reference to anything but the hardware ISA specification and the ABI that defines the relevant calling convention. It's as "low level" as a software construct can be. The only lower-level implementations I can think of are hardware-assisted context-switching features like you see on some embedded architectures.

Coroutines, on the other hand, are artifacts of the language runtime and depend in sensitive ways on that whole stack. Much higher level.


> Not sure I understand what you're trying to say here

The exact sense of the phrase you excerpted is explained in the next clause of the same sentence of the comment.


OK, then that's backwards, sorry. The fact that A is hard to implement in terms of B doesn't mean that A is a "lower level" (or even "higher level") construct in the sense people usually use those terms. It just means that the two metaphors are incompatible.

Honestly I was asking the question because I thought you had some deeper point in mind I couldn't see. But you just had it backwards.


That's kind of the problem when the author uses meaningless words like "WEB SCALE": we don't really know exactly what it means.

If it's about serving a great number of concurrent connections, I'm not sure why M:N is the only way to go.


Most languages give you one of two easy options. The first is to use green threads, which makes 100,000 new threads cheap, but now a single slow function call anywhere can block other threads. The second is to use OS threads, which keeps a slow function call in one thread from interrupting your responsiveness, but now makes 100,000 new threads expensive.

What M:N threading does, at least in theory, is allow you to create 100,000 threads cheaply AND keep a slow function call in one thread from blocking others. In a language with those features you can build it out of continuations (NOT coroutines) and real threads. But that requires building your own scheduler that can move a user thread to a different process at need. Which is non-trivial, and is unlikely to cooperate well with any third party libraries.
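
To make the "cheap threads" half concrete, here's a minimal sketch in GHC Haskell, whose runtime also does M:N scheduling (as others note further down the thread). The 100,000 count and the Chan-based rendezvous are just illustrative choices:

  import Control.Concurrent
  import Control.Monad

  main :: IO ()
  main = do
    done <- newChan
    let n = 100000
    -- n lightweight threads; the runtime multiplexes them onto a handful
    -- of OS threads (compile with -threaded, run with +RTS -N for cores)
    replicateM_ n (forkIO (writeChan done ()))
    -- block until every thread has checked in
    replicateM_ n (readChan done)
    putStrLn "all 100000 threads finished"

Spawning that many OS threads would be prohibitively expensive; here each thread is a few hundred bytes of heap, which is the whole point of the M:N model.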

That said, whether M:N threading is a real optimization depends on a ton of things. A big one is the underlying performance of the scheduler and the language. Which is improving for Perl 6, but has a long way to go.

Whether optimizing at this level matters is another good question. Most companies shouldn't care. Those that should, should be cautious about deploying Perl 6 into a mature infrastructure.

So that's why people can think that M:N is important, and why you might legitimately disagree.


Very useful explanation. Thank you.


> That's kind of the problem when the author uses meaningless words like "WEB SCALE": we don't really know exactly what it means.

When the author (as is the case here with M:N threading) identifies the specific feature of interest, we do know exactly what it means, independently of whether they also reference something less precise.

> If it's about serving a great number of concurrent connections, I'm not sure why M:N is the only way to go.

If you want to posit an argument challenging the idea that M:N threading is an advantage, that's a different thing than disagreeing with the author's contention that choices are fairly limited if you want a language with language-level integration of M:N threading.


The author demonstrates exactly what is meant with code examples in the article.


This is incorrect. Coroutines and fibers are not M:N threads. There are important implementation and usability distinctions. Besides, there are a lot of additional usability issues with bolted-on concurrency vs language-level concurrency.


Ah yes, let's use Perl 6 to avoid usability issues.


I wasn't advocating for Perl; I've never used it.


Unfortunately, unless the IO in the stdlib uses or is modified to schedule these fibers around evented IO, they're essentially useless as soon as you use an external library. Go and Erlang do this; Crystal does this too, but with only N:1 multiplexing (that will change before 1.0); I don't know about other languages.


Very true. Node does this too, in the sense that all Node libraries are async/callback-aware or have async/callback versions.

I much prefer the go/erlang version of this. Playing the "what color is your function" game or "will it block" is no fun.


> Playing the "what color is your function" game or "will it block" is no fun.

This was quite painful with Node.js 1-2 years back, when Promises and coroutines started picking up steam [such as https://github.com/tj/co]. It was always a shot in the dark.

Has this situation changed now? (basically is the stdlib all async friendly now?)


Yes, insofar as it's been possible for a few years to mechanically upgrade stdlib functions marked with the Async suffix into promises with a helper library such as Bluebird. Any promise-based function complying with the common promise spec is automatically async-friendly; that is to say, the async/await keywords do what you expect and the programmer does not need to chain then-ables anymore. Node v8.0.0 adds a promisifier into core for convenience: https://nodejs.org/dist/latest-v8.x/docs/api/util.html#util_...


.NET has both nonblocking evented IO and standard blocking IO in the stdlib. Most modern libraries use asynchronous IO, however lots of legacy applications still use blocking calls, making them unsuitable to run in the thread pool.


Pretty sure the ALL CAPS on WEB SCALE CODE indicates the author is in on the joke.


> Ruby has [...] likewise community gems with more robust functionality like Quasar.

https://github.com/ruby-concurrency/concurrent-ruby for those wondering. Rails uses it as of version 5.


Concurrency doesn't mean multithreading. The "yield" keyword in Python enables a form of concurrency, and Python is infamous for not being multi-threaded (when interpreting Python).


You must be thinking of something else. Concurrent Ruby doesn't include any functionality that is anything like fibres, lightweight threads, or M:N threading (I work on the project).


PHP even has it.


Ruby also has JRuby, which has true parallelism & concurrency and does not suffer from the global interpreter lock like vanilla ruby/python.


Unfortunately, most Ruby gems are written with no concept of thread safety. I say this from the perspective of a long-time Ruby fan, terrified when I hear of critical systems running on JRuby.


Also see Kotlin's concurrency


The intersection of "needs M:N threading" and "doesn't want to use Go"...


Nonsense: Solaris has had that thread multiplexing for over 15 years, for one counterexample.


Didn't Solaris end up ripping out M:N threading and switching to pure kernel threading, as it was found inadequate as a general pthread implementation?


I'd like to know why people are downvoting this.


Probably because it treats a high-level feature (M:N threading) as equivalent to a lower-level feature (coroutines), and then compounds that by treating languages where the former is built into the core as equivalent to languages where the latter is available as a third-party library. All of this as a way of dismissing the value of languages where the former is built in, layered with scorn and derision on top of being doubly wrong.

Now, I preferred correction to downvoting, but I can easily see why others would choose downvoting.


Ehh... I don't really understand the impulse to "call out" downvotes, or ask people to explain why they're downvoting something that you wrote or agree with.

It's probably just my age, as I started out online when most discussion forums lacked explicit voting mechanisms, before social media came along and "gamified" human conversation. Maybe it helps police trolls, but at the cost of encouraging group-think and excessive meme-referencing, and just making a lot of people more neurotic and insecure in general. Overall, I think that online discussion is worse than it was 15 years ago, due to the psychological effects of all the imaginary Internet points.

But anyhoo, at the time I saw this (45 minutes after my original comment), my comment had 25 votes and was the top-most thing on this page. More often than not, if you wait an hour, the crowd straightens out the impact of any early outliers.


There's a difference between downvoting because you don't like the tone or don't agree on something subjective and downvoting because something is incorrect or inaccurate. If people downvote because it's incorrect I'd like to know what's incorrect about it.

> But anyhoo, at the time I saw this (45 minutes after my original comment), my comment had 25 votes and was the top-most thing on this page.

It was grayed out at the bottom of the page when I saw it.


> Ehh... I don't really understand the impulse to "call out" downvotes, or ask people to explain why they're downvoting something that you wrote or agree with.

Sometimes there is some really weird voting dynamic here and it would be interesting to know what goes on.

For lack of a better word, you can call it intellectual curiosity or something.


I can't remember the last time I saw a reasonable comment more than a half hour or so old greyed out. But I often see a highly-voted comment with an old greyed-out reply saying "why is this being downvoted". All that's happening is that the sample size of votes is too small early on to be meaningful.


> But I often see a highly-voted comment with an old greyed-out reply saying "why is this being downvoted".

Many times it's because the user was just asking a question, or made what they thought was a perfectly reasonable comment.

HN is pretty horrible with voting etiquette.


But I still often wonder about the motivation for those early downvoters.

And I'll still hold that this is acceptable curiosity.


I agree with that. I too was curious about why someone downvoted a comment I made the other day. I can honestly say I don't worry about losing a point over it (I've not looked into how points work much), but I thought it was odd that someone would do that without explaining what motivated them.


Because it's not a helpful comment. It's wrong in a lot of ways and misleading in others.

Like dragonwriter said, having coroutines in your language or a fiber library is not close to the same thing as the language having a scheduler that maps those fibers onto native threads for you. Some of the languages mentioned can't even execute multiple threads on multiple cores in parallel.


For Python there is no native support for threads because of the GIL. You can get around this through multiprocessing etc., but these are hacks.


You might have chosen bad words, but people reading this should know this statement is false. Python has threads. You can create them and (if you're using Linux) see them in the /proc filesystem.

A more accurate statement is: "python bytecode can't execute concurrently w/o something like multiprocessing". The key difference? You can do multithreaded I/O all day long, which is a pretty important use case of threading. You can also hand off work to C and release the GIL, although this isn't as common. Example: XML parsing, regex, numeric work.

FWIW all these gotchas are one of the reasons I love just having complete threading support built in to my language.


> A more accurate statement is: "python bytecode can't execute concurrently

No this isn't any more accurate. Python bytecode does execute concurrently if you have multiple threads. It just doesn't execute in parallel.


In fact, it is quite common. If you look into the C modules, you see a pattern where they release the GIL before performing some operation and then reacquire it.


There is native support for threads, it's just that they don't run concurrently.


At this point the word is so far removed from the meaning in this thread you might as well say a haberdasher supports threads.


There are valid reasons for programming with threads on only a single CPU (indeed, when I learned thread programming in the 90s, pretty much only single-core CPUs were available). For instance, IO blocking.


They do run in parallel, so long as all but one are running low-level library code which has released the GIL, but you can't call back into the runtime without grabbing the GIL.


While we're being pedantic, they do run concurrently (for example, dispatching a dozen concurrent HTTP requests); it's just that the GIL prevents most CPU work from executing in parallel.


AFAIK Erlang has a lock as well - in fact, I think most green thread implementations do: what you want/need at a minimum is one OS process or thread per CPU core that works as a scheduler, and then some form of (co-operative) scheduling with only user-mode/low-overhead context switching?


Scheduling and locking aren't the same thing. Here's a brief excerpt from Joe Armstrong that I found on SO while looking for a good explanation.

> [Erlang] is a concurrent language – by that I mean that threads are part of the programming language, they do not belong to the operating system. That's really what's wrong with programming languages like Java and C++. Its threads aren't in the programming language, threads are something in the operating system – and they inherit all the problems that they have in the operating system. One of the problems is granularity of the memory management system. The memory management in the operating system protects whole pages of memory, so the smallest size that a thread can be is the smallest size of a page. That's actually too big.

> If you add more memory to your machine – you have the same number of bits that protects the memory so the granularity of the page tables goes up – you end up using say 64kB for a process you know running in a few hundred bytes.

With Erlang, there are threads running concurrently across multiple CPUs and a preemptive scheduler within each of those threads. This is all transparent through use of Erlang processes. Essentially, many concurrent schedulers managing hundreds of thousands of processes that start out at 0.5 KB each. A JVM thread is 1024 KB by comparison, and a goroutine is 2 KB.

You're getting legitimate concurrency with Erlang, as well as isolation. There are only so many things that a single CPU can do at the same time, and that's where the scheduler becomes necessary. The scheduler also ensures that if you fire off 100,000 processes and one of them is CPU intensive, it can't hog the processor and prevent the others allocated to that processor from executing.


That doesn't sound right re: Erlang; its everything-is-immutable model is supposed to avoid the need for a GIL. Erlang processes are also preemptive, so there's no possibility for a process to lock up the whole VM (unless it calls into a long-running NIF, but native code is always dangerous in a "crash BEAM if you don't get it right" kind of way).


IIRC, Erlang uses shared mutable state internally, though it's not exposed, so it needs to lock around accesses to that; but this isn't equivalent to the Python GIL.


Only relevant on CPython; other implementations don't have a GIL.



Ok, but ZipPy, Jython and IronPython do not.


Have Jython and IronPython reached compatibility with Python 3 yet? Last I checked they hadn't, or it was still in alpha.


Yes, but ain't nobody really using Jython or IronPython. Yes, some people are, but I'd bet a very small group of the greater Python whole. You also introduce incompatibilities that way. Perl6 has this in the base implementation, so I don't have to switch to a lesser-maintained distribution.


I don't know, but Python 3 still isn't relevant to many people.

For example, I have to explicitly install 2.7 to be able to use Cocos2d-x build scripts.


Maybe, but it seems like a bad idea to invest in Python 2 (which is reaching its end of life) just to get parallelism when there are other programming languages that have great concurrency stories and a bright future. That said, if PyPy3 ditches the GIL, let me know!


Someone posted an article a while back... people are moving over in significant numbers now.


Yeah, "not relevant to many people" is a wildly false claim. If it were anywhere near true, we wouldn't see the level of Python 3 support that we have in the major packages (95.8%).

http://py3readiness.org/


The Jython project's latest release and news post were from mid-2015; the project would seem to be pining for the fjords, so to speak.

Jython and IronPython are both limited to Python 2.x; ZipPy is a WIP without a stable release.


But not PyPySTM


> I'd like to know why people are downvoting this.

Because for some people, it's not enough to think of others as misinformed, or not so well-spoken.

They need to be thought of as blundering idiots, and castigated accordingly.


People often downvote anything that's remotely negative, even if it is true or merely a critical question.


For some reason, when I started my software engineering career I got it into my head that I needed to learn as much as I could about programming languages.

I learned ruby, perl5, python, lisp, forth, ml, ocaml, scheme, haskell, r, c#, java, lua, c++, factor, idris, asm, erlang, prolog, rust, d. But that wasn't quite enough because haskell and idris kept on talking about complicated type theory stuff. So I also learned lambda calculus, type theory, set theory, domain theory, topology, category theory, information theory, sub-structural logic.

What I'm trying to say is that I'm not afraid of learning new things. Even if they seem hard or esoteric.

When I heard that perl6 was ready, I took a look. I like the idea of a lot of what is present in the language (hey look, a grammar engine, that's neat). But ultimately, I decided that it was too much stuff that I would have to learn. Maybe that's just a perception problem on my part, but I have to think that they have some sort of problem if someone like me feels overwhelmed by all of the things that you have to grok in order to understand the language.


Perl 6 is designed to be a large, feature rich language that can be learned and used incrementally. The intent is that, like users of natural languages, Perl 6 practitioners gain fluency and nuance in their expression over time.

So, if you want to learn Perl6, don't worry about writing "baby talk" Perl 6. Write something that works and solves your problem. Later on you may learn that there are multiple other ways to express the same ideas you wrote already. Maybe one of them better expresses the heart of your algorithm, then you update your code.

Now, you may think, how the heck am I going to maintain code written by a bunch of people at different phases of learning Perl6?

The Perl community has responded to this challenge by putting a huge amount of work into writing excellent documentation. Perl6 docs are readily searchable and carefully indexed. It's a big, new language with lots to write about, so the docs are not yet complete, but they are already fantastic and keep getting better.

So, please, just pick a problem to solve and jump on in. The water's fine.


Also, errors are mostly pretty good and getting better. I think a big and overlooked part of learning a new language is the quality of the errors the language provides. Some do this better than others, and some get it right in the core language, but by the time you've stacked up enough libraries and extensions to get things done nicely, the errors aren't so good anymore, because they talk about things at a lower level than the one you're thinking at (Perl 5 can fall prey to this latter problem, even though the core language has pretty good errors these days).


While I encourage OP to dive in and don't want to dismiss your points, in my experience life is more about perpetual intermediates: https://blog.codinghorror.com/defending-perpetual-intermedia...

As a result, a language like Perl is very nice and maybe even artistic but in many situations downright "dangerous".


Perpetual intermediates exist in every field, many of them with far more nuance than any mere programming language has to offer. The way to extract productivity from these people is the same in software development as it is in any endeavor: leadership.

Good leaders identify and propagate conventions that can keep their teams functioning in a complex milieu.

You can't get rid of the irreducible complexity of any system. You can only move it around.

In my experience, if you use an expressive language well, you can fit the code to the problem domain in a way that better communicates the underlying approach you have taken to solving the problem.

Relying on strict, small languages, like Java, means that you gain a consistency of syntax by sacrificing flexibility.

I've seen monstrous turds written in many languages, the problem in all these cases was poor or absent leadership.


That's understandable; if you look at the slides and see all of the integrated features in one big blast, you're going to think "holy moly this was not meant for mortal men".

In a way that's correct, because Perl6 is designed as the ultimate "kitchen sink" language; it has all the little features you could think of already baked in, which will include a lot of features you won't use.

The main reason for this stems from the overarching design philosophy: "There's more than one way to do it." The language ultimately tries to be as flexible as possible, going so far as to support modification of the core grammar, the object system, etc.

This is meant to make the developer as comfortable as possible, but it can lead to the common case of perl-itis, also known as "write-once, read-nonce" code.


Another way to look at it is that it's designed to be a great "babytalk" language, the idea being that everyone should be able to learn enough of the language, relatively easily, to get done what they need to get done. I'm not saying it's there yet -- doc is critical -- but that's part of the idea.

Conversely, whereas the P5 motto was TIMTOWTDI ("there is more than one way to do it"), the P6 motto is closer to TIMTOWTDIBSCINABTE ("...but sometimes consistency is not a bad thing either"), so there are usually relatively few idiomatic ways to do something.


Think of it as incorporating many of the features in all of the languages you mentioned above, but in one language that is designed to grow. The language can easily be enhanced both officially and by the end user. So not too many new concepts for you, just a different syntax.


> I learned ruby, perl5, python, lisp, forth, ml, ocaml, scheme, haskell, r, c#, java, lua, c++, factor, idris, asm, erlang, prolog, rust, d.

With all due respect, you didn't "learn" these languages. You got a superficial understanding of these languages. It's like someone who learns to say "hello, name is John, what is your name" in 17 human languages saying they learned 17 languages.

Being able to write "hello world" in all these languages doesn't mean you "learned" it.


I hate perl6. I hate it because I tried to get involved in the project early on, and it led me down the Haskell rathole. I don't know what Haskell looks like today, but a decade or more ago it was the hardest language to pick up that I had ever experienced. It was as if I had a solid background in Latin languages and was trying to pick up Chinese based on a handful of tutorials written by a tourist on the back of a napkin.

But it has been a decade and I am truly impressed with what it has turned into. Unfortunately, it has to re-gain mindshare as if it was starting from scratch. It might be a little bit harder actually, because there are a variety of scripting languages these days that are easy to learn, and there are still more than a few people who actively don't like Perl.

I really liked this slideshow: http://tpm2016.zoffix.com/#/

It gives a good review of Perl6 from early 2016. The video is an hour and a half, but it only takes a few minutes to scan through the slides and find the interesting pieces. (left and right arrow to navigate the slides)


The Haskell introductory landscape has definitely changed.

Learn You A Haskell for great good (http://learnyouahaskell.com) is an excellent introductory book to read to get started with Haskell.

A drier but more in-depth read is http://book.realworldhaskell.org/

Also, when you hit monads again, the best advice I can give you for understanding them (99% of Haskell guides out there seem to be about monads) is to take this path:

Functors -> Applicatives -> Monads

And check out this link: http://adit.io/posts/2013-04-17-functors,_applicatives,_and_...
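
If it helps, here is that progression in miniature, using Maybe; the helper `half` is made up for illustration:

  half :: Int -> Maybe Int
  half n = if even n then Just (n `div` 2) else Nothing

  -- Functor: apply a pure function inside the context
  functorEx :: Maybe Int
  functorEx = fmap (+ 1) (Just 4)                   -- Just 5

  -- Applicative: apply a wrapped function to wrapped arguments
  applicativeEx :: Maybe Int
  applicativeEx = pure (+) <*> Just 2 <*> Just 3    -- Just 5

  -- Monad: chain computations where each step may fail
  monadEx :: Maybe Int
  monadEx = Just 20 >>= half >>= half               -- Just 5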

I'm not sure those resources were around when last you tried haskell, but they're worth giving a read.

Haskell has M:N threading, and a strictness about types and semantics that's very refreshing to me. Yes, I could have written my most recent web project in some other language that would have been quicker (because it would have been simpler and likely more imperative), but I trust my codebase so much more with Haskell. Ironically it's my own understanding and clarity of thinking that I sometimes doubt now, but that's a good problem as far as I'm concerned, makes me think clearer.

[EDIT] - Can't believe I forgot, but Erik Meijer's series on Haskell is actually one of the first resources that actually got me really into it, it is absolutely fantastic, I actually watched the whole series before I read learn you a Haskell.

https://channel9.msdn.com/Series/C9-Lectures-Erik-Meijer-Fun...


I'm learning myself some Haskell (so I'm far from an expert, so take this with a grain of salt...), but AFAIK the current recommended introductory resources are described here: https://github.com/bitemyapp/learnhaskell

tl;dr: 1) If you want to buy a book, http://haskellbook.com/ is supposed to be good

2) If you instead want to learn from free online sources, a) cis194 spring 2013 followed by b) The data61 course (links in the github link above).


I am always curious why monads are important in functional languages. Isn't it accurate to say that a monad is just a wrapper class?


In my opinion they're important to pure functional languages (like Haskell) because they're the abstraction that IO is isolated into (IO doesn't require monads, and monads don't always mean that IO is happening, but that's the abstraction they chose).

The monadic pattern is just extremely common (and desirable), here's what the wiki (https://wiki.haskell.org/Monad) says:

> Monads in Haskell can be thought of as composable computation descriptions. The essence of monad is thus separation of composition timeline from the composed computation's execution timeline, as well as the ability of computation to implicitly carry extra data, as pertaining to the computation itself, in addition to its one (hence the name) output, that it will produce when run (or queried, or called upon). This lends monads to supplementing pure calculations with features like I/O, common environment or state, etc.

If I were to try and paraphrase that, it's because it is such a good abstraction for all the things you have to do that aren't as simple as pure functions. In other languages, just about any function can do anything, so they're understandably less important.


Perl6 is no longer implemented in Haskell on the backend. I think the old parrot VM was, but MoarVM is in C. On the bright side, a lot of cool FP concepts made it into P6 as a result of the original Haskell backend, so useful cross-pollination happened. Even better, you can do FP stuff in a much easier to understand way, and you can also mix in OO when needed or relevant.


> Perl6 is no longer implemented in Haskell on the backend. I think the old parrot VM was

Parrot was implemented in C, and to my recollection implemented various iterations of bytecode syntaxes, notably PBC (parrot byte code).

Rakudo Perl 6 developers took the route of developing Perl 6 through an intermediary language NQP, which is a language aimed at making it easy to implement interpreted languages.

Rakudo (the implementation of Perl 6 that runs on MoarVM) is implemented in NQP, and started out on Parrot through an implementation of NQP for Parrot.

NQP[1] stands for Not Quite Perl, and shares various features with Perl, but is much more constrained in features and capabilities, making it easier to implement on a VM.

Because it's mostly implemented in NQP, it's fairly easy to port to a new VM, as all you have to do is get NQP running on that VM, and Perl 6 is mostly ported. For example, this is how the Java port happened within a few months of announcement. NQP was implemented, and then 95% of Perl 6 was already working.

When a new VM instead of Parrot was desired, MoarVM was conceived of as a VM to natively run NQP (rather than NQP on top of the native VM bytecode). Porting Rakudo to it was relatively easy, since Rakudo targets NQP, not a specific VM.

Pugs, conversely, was an interpreter/VM for Perl 6 back in the days of Parrot, and ran its own Perl 6 implementation directly (no intermediary language, and it wasn't Rakudo). It was written in Haskell.

Niecza was another direct implementation of Perl 6, but implemented in C# for .Net. I don't believe it used NQP.

1: https://github.com/perl6/nqp


> Parrot was implemented in C, and to my recollection implemented various iterations of bytecode syntaxes, notably PBC

PBC was the bytecode format; the human-readable low-level syntax was called PASM, and the slightly higher-level syntax was called PIR.

Besides PIR, other languages still close to the metal were NQP versions (though I don't remember the exact relationship between NQP/NQP-rx/parrot-nqp) and eventually Winxed.


Ah, I figured that part was slightly off (since I knew PBC was bytecode and thus my waffling with "bytecode syntaxes"), but really didn't want to research the history I had already forgotten. Thanks for the correction.

> Besides PIR, other languages still close to the metal were NQP versions (though I don't remember the exact relationship between NQP/NQP-rx/parrot-nqp) and eventually Winxed.

I vaguely remember NQP/NQP-rx etc as being branches with reimplementations or new features, but I could be way off. I remember Winxed, but I'm not sure I realized Whiteknight implemented it at the same layer as NQP instead of on top of it. That would explain why some other new language implementations started using it instead of NQP, which I recall (e.g. I think someone started up a new Ruby on Parrot implementation after Winxed that used Winxed). I miss reading Whiteknight's blog posts on massive changes he was working on.


Winxed was NotFound's brainchild, though I believe Whiteknight did push its usage.

The way I remember it, the issue was that later versions of NQP moved towards greater backend independence and no longer mapped directly to Parrot VM semantics (e.g. 6model), so Parrot had to rely on an older NQP version, making Winxed an attractive alternative.


That's close to what happened.

The first relevant version of compiler tools were called PGE/TGE, written in PIR and based on the notion of attribute grammars and grammar engines. The first versions of NQP grew out of those.

I believe early versions of NQP were hosted in the Parrot repo, but they started to be developed out of tree and eventually were completely hosted out of the tree.

NQP was an improvement over PGE/TGE in many ways, so some of Parrot's internal tools were created in or ported to NQP.

Eventually (I believe with NQP-rx), NQP forked into an incompatible version out of tree. As you mention, when the next incompatible revision of NQP was being planned, it was no longer an attractive target for Parrot, because it was:

  * developed out of tree
  * developed out of Parrot's release cycle
  * backwards incompatible
  * "backend independent"
In particular, migrating existing, working Parrot tools to a new language which offered little beyond busywork to gain "backend independence" wasn't worth the effort. (Having NQP go unsupported for existing tools wasn't great either, especially given that one explanation for "backend independence" was that Parrot tools weren't evolving fast enough to support Rakudo features -- still feels like a circular argument.)

I still maintain that it made little sense for Parrot to depend on an external project, evolving rapidly, offering little stability, with the express intent to obviate Parrot.


> Winxed was NotFound's brainchild, though I believe Whiteknight did push its usage.

That's interesting. I'm not sure why I remembered it so strongly as Whiteknight's invention, but it probably has to do with how much he talked about it, and that my initial exposure to it was through his blog.

> The way I remember it, the issue was that later versions of NQP moved towards greater backend independence and no longer mapped directly to Parrot VM semantics (e.g. 6model), so Parrot had to rely on an older NQP version, making Winxed an attractive alternative.

That sounds likely. The whole situation was a clusterfuck. To my knowledge, some people are still really upset about it and how it played out (but at this point I don't really see how it could have gone any other way).


Thanks! I confused Parrot with Pugs. I wasn't watching it at that point, so all just a history lesson to me. I appreciate the correction!


I looked quickly through the slides and saw that Perl now has grammars built into the language...

Truly an interesting feature


It is. You can build recursive descent parsers with a declarative syntax, and get all the benefits of having it in OO (role composition, inheritance).

Here is a working grammar for INI config files: https://github.com/tadzik/perl6-Config-INI/blob/master/lib/C...
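
For comparison (and for readers who don't speak Perl 6 yet): the same declarative recursive-descent idea, sketched with the ReadP parser combinators that ship in Haskell's base library. This handles only a simplified INI shape (no comments or blank lines) and is not a port of the linked grammar:

  import Text.ParserCombinators.ReadP

  type Ini = [(String, [(String, String)])]

  -- a bare name: anything except the INI metacharacters
  ident :: ReadP String
  ident = munch1 (`notElem` "[]=\n")

  -- a section header like [database], terminated by a newline
  header :: ReadP String
  header = between (char '[') (char ']') ident <* char '\n'

  -- a key=value line
  pair :: ReadP (String, String)
  pair = (,) <$> ident <*> (char '=' *> munch (/= '\n') <* char '\n')

  section :: ReadP (String, [(String, String)])
  section = (,) <$> header <*> many pair

  parseIni :: String -> Maybe Ini
  parseIni s = case readP_to_S (many section <* eof) s of
    [(ini, "")] -> Just ini
    _           -> Nothing

The Perl 6 version gets you the same shape of parser, but with the rules written as declarative grammar syntax and composable via the object system.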


Is the language itself a good place for parsing/grammar handling?

I could not even find in the standard documentation which kind of parsers it supports, is it LR(k)? LL(k)? Any CFG? What parsing method does it use: recursive descent, shift-reduce, something more general like CYK/Earley algorithms? How does it handle ambiguities?

It turns out that the `grammar` keyword generates recursive descent parsers, and anything even a bit fancier still requires a dedicated parsing library.


> It turns out that the `grammar` keyword generates recursive descent parsers

Yes, P6 grammars generate recursive ascent/descent parsers.

(It allows arbitrary mixes of ordinary P6 closures, using the 'Main' DSL's syntax, with parsing P6 closures, using the 'Regex' DSL's parsing syntax. Purely declarative parts of the latter are true regular expressions and are mapped to automata by the compiler.)

> and anything even a bit fancier still requires a dedicated parsing library.

Huh?

Please explain what it is about the Perl 6 grammar (the grammar for Perl 6 that is itself a Perl 6 grammar) that isn't fancy.

In my recent SO post [1] I wrote "In most cases, the practical answer when you want to parse anything beyond the most trivial of file formats -- especially a well known format that's several decades old -- is to find and use an existing parser.". Is this what led to your "bit fancier" conclusion about P6 parsing?

> Is the language itself a good place for parsing/grammar handling?

Yes.

Not only does this allow devs to more easily write and use parsing code, just as P5 made it easier to write and use regexes, but it also makes it easier to mutate Perl 6 and write DSLs.

[1] https://stackoverflow.com/questions/45172113/extracting-from...


perl6 grammars have to be seen in the context of perl5 regexps.

Basically, regular expressions have grown to be impossible to understand, write, and modify. Mostly due to perl.

Yet, they are used daily by millions of developers, because they are a ton better than manual character checks.

Perl6 grammars are basically an attempt to bring developers another step forward.


Are you saying that Perl6 grammars are a replacement (maybe eventually) for Perl5 regexps?


partly, you still build Grammars out of "token", "regex" and "rule" definitions, but some things are more natural with one or the other.

So, if you are doing "test if string has whitespace" you'd use a regex, while for "parse an apache log line" you'd probably use a grammar instead of a very nasty regex.

IANA perl6 dev though, just a casual observer.


> while for "parse an apache log line" you'd probably use a grammar instead of a very nasty regex.

That's actually a fairly easy endeavor with perl regex capabilities, especially named captures. Parsing HTML, or a programming language, or other much more complex structures is much easier with a grammar. For example, JSON parsing[1].

That said, there are ways to approximate some features of grammars with Perl5 regexes[2].

1: https://github.com/moritz/json/blob/master/lib/JSON/Tiny/Gra...

2: http://blogs.perl.org/users/brian_d_foy/2013/10/parsing-json...


Thanks, riffraff and kbenson.


There is an old (InfoQ, I think) presentation with Damian Conway where he calls it a new paradigm, "grammar-based programming": you write a grammar parser object easily using the built-in functionality, then add a little code to go with it, and BAM, you're finished. I think this elegantly solves quite a few programming tasks of mine.


FWIW, I, too, have tried learning Haskell and ended up banging my head against the wall over and over. With Lisp, I eventually "got it", but Haskell has evaded my attempts at understanding it with impressive stubbornness[1].

Disclaimer: I do not hate Haskell, I just don't "get it". If you love the language, fine, have as much fun with it as you possibly can. I for one have given up, at least for now. Maybe I'll try again next year.

[1] NB that the word "stubbornness" has three pairs of double consonants. How fun is that?


Haskell's rough to learn because it hits you with purity, category theory inspired typeclasses, and laziness all at once. It's natural that people struggle with this (I sure did).

If you feel like trying a pure language again someday you could consider PureScript (which just has the first 2 things above) or Elm (which just has the 1st). If you do try I'd like to hear how it goes, email in profile.


My main problem with Haskell was not the pure, functional part. It was the side-effect-laden part.

If I were ever to go there again, I would probably give OCaml a try. It seems to offer many of the benefits of Haskell, while also offering "a way out" into the familiar world of objects and side effects. But don't hold your breath, I don't think I will get there this year.


What do you mean by the "side-effect-laden" part? Did you find yourself writing lots of side effecting code because of Haskell (which seems weird)? Or Haskell just wasn't a good fit for a project that was full of side-effecting code?

If it's the latter, I can definitely tell you that part of the zen of Haskell is firming up the boundaries between side-effecting code and pure code -- for example, IMHO the better you get at Haskell, the fewer `do` statements show up in your code (for better or for worse, because >>=, <*>, and co. make it harder for newcomers to enter a codebase).


Writing a purely functional function like, say, factorial, or the Ackermann function, was no problem at all.

Writing a "purely" side effect-based one, say, read a string from stdin, parse an age, and say, "What, you are %d years old?!?! Wow, you're old!" completely eluded me.

Maybe it was just that all the tutorials I encountered sucked. Maybe I am just too dumb for Haskell. Given that I currently really love Go, I kind of suspect the latter. But who knows? I'll find out next year, I guess. ;-)


I felt the same for a long time. At first glance something like

    import Text.Printf

    main :: IO ()
    main = do
        age <- readLn
        putStrLn (formatAge age)

    formatAge :: Int -> String
    formatAge age = printf "What, you are %d years old?!?!?! Wow, you are old" age
might look nice and imperative. But then you realize it is just syntactic sugar for `main = readLn >>= printf ...`, which is incredibly confusing when coming from another language. And then you learn that libraries can just invent their own meanings for `>>=` as long as the type signatures match and all bets are off. The upshot of this is that the code like the one above is easy to test and could be async or parallelized without issue. The downside is that people usually fail several times before finally getting it.
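
For concreteness, here is the do-block above hand-desugared (using the same formatAge):

  main :: IO ()
  main = readLn >>= \age -> putStrLn (formatAge age)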

The Haskell book[1] is apparently really helpful with this, but it also costs $60 and is over a thousand pages, so it's a pretty large investment in both time and money.

[1]: http://haskellbook.com/


> The Haskell book[1] is apparently really helpful with this, but it also costs $60 and is over a thousand pages, so it's a pretty large investment in both time and money.

For a really good book on programming, I'll happily pay 60 bucks. I will put that on my Christmas wish list, and give it another try next year.


So yeah, I definitely get that. I'm no Haskell master, but I found that my code progressed this way (assuming I was trying to do what you mentioned):

  import Control.Monad (when)

  main :: IO ()
  main = do
    putStrLn "Please input your age:"
    age <- readLn :: IO Int

    when (age < 10)              (putStrLn ("..." ++ show age))
    when (age >= 10 && age < 40) (putStrLn ("..." ++ show age))
    when (age >= 40)             (putStrLn ("..." ++ show age))

But as you get more comfortable, you get something like:

  type Age = Int

  -- Imagine there are constants here for youngAgeMessage, middleAgeMessage, and oldAgeMessage;
  -- for simplicity they're just Strings, though you could add even more descriptive aliases
  -- like "OldMessage" with a quick union type like AgeMessage = YoungAgeMessage | MiddleAgeMessage | OldAgeMessage, etc.

  makeAgeMessage :: Age -> String
  makeAgeMessage age
    | age < 10              = youngAgeMessage ++ show age
    | age >= 10 && age < 40 = middleAgeMessage ++ show age
    | age >= 40             = oldAgeMessage ++ show age

  main :: IO ()
  main = putStrLn "Please enter your age:"
         >>  getLine
         >>= putStrLn . makeAgeMessage . read

This isn't even the final form of this code either; there are some more things you could do to make it more idiomatic Haskell, but it should at least show what I mean.

It's similar to early Clojure and the use (and eventual love, usually) of the threading macro "->", and the Unix philosophy -- once you start writing those clean functions that pass whatever they need right along, it starts getting easier (and more desirable) to pluck out the parts that don't have to be side-effecting.

Also, the type system's expressiveness is just amazing -- it's what Java should have been but never got the chance to be:

  data Thing = OneThing | AnotherThing | ThirdThing
Guess what a `Thing` can be? Literally just those things, not even a null or anything.

A lot of people say they really love/need/only use "strongly typed" languages (which means a ton of things to a ton of people), but the biggest slap in the face to me is how a language like Java (the kind people will reach for when they want to compare some weaker language to a "strongly typed" one) is a literal minefield of Null Pointer Exceptions (NPEs). "Anything can be anything or sometimes nil" is a hard pill to swallow once you've used Haskell.

Also, trying to use Option<> everywhere feels wrong if you do it in Java, because it starts to bleed everywhere, but actually... it's absolutely right (IMO) -- imagine how much better Java code could be if the default was to Option.map over things, and the second you decided to try and pull a value out, you knew you were opening yourself up to something bad, instead of just passing things around and hoping they're there or writing repetitive checks.

Now, if you see a `Maybe Thing`, you instantly know that that thing is either JUST the thing, or it's Nothing, and maybe's type is:

  data Maybe a = Just a | Nothing
Type classes are also amazing; they're basically just as ergonomic as Go's interfaces, without some of the weird hangups and interface{}.

All that said, I definitely get the hangup. Haskell is difficult to get started with, even to this day, but man, that hurdle is so worth it. Maybe I'm just addicted to high-learning-curve things, but Haskell feels good.


This is a great comment outlining some of Haskell's strengths. Definitely, ADTs should exist in every language claiming their static type system as a feature.

Just one thing that stands out to me:

> "Anything can be anything or sometimes nil" is a hard pill to swallow once you've used Haskell.

In a language like Java yes, nearly every type is inhabited by null. In fairness, a similar problem exists in Haskell: every type is inhabited by "bottom" which is the computation that never terminates. This is a side effect of being lazy by default.


Hey is that truth affected by the talk surrounding enabling strictness with a flag in Haskell? I don't know much about the actual progress/implementation (mostly because I haven't had too many problems with laziness yet)

I've heard that Idris is basically Haskell but strict by default -- I've never tried it but maybe I should? I'm hesitant to go to a language with even less libraries/support/mindshare than Haskell though


I totally understand your frustration with Haskell.

I understand the tutorials and have read through the books available, but when I try to use the language on a small real project, say a 2D barcode generator, I have trouble getting off the ground. It's not even clear to me which set of features is recommended for the language. It seems like different users program in different, incompatible languages! Here is a list of optional language extensions supported by GHC:

Cpp, OverlappingInstances, UndecidableInstances, IncoherentInstances, UndecidableSuperClasses, MonomorphismRestriction, MonoPatBinds, MonoLocalBinds, RelaxedPolyRec, ExtendedDefaultRules, ForeignFunctionInterface, UnliftedFFITypes, InterruptibleFFI, CApiFFI, GHCForeignImportPrim, JavaScriptFFI, ParallelArrays, Arrows, TemplateHaskell, TemplateHaskellQuotes, QuasiQuotes, ImplicitParams, ImplicitPrelude, ScopedTypeVariables, AllowAmbiguousTypes, UnboxedTuples, UnboxedSums, BangPatterns, TypeFamilies, TypeFamilyDependencies, TypeInType, OverloadedStrings, OverloadedLists, NumDecimals, DisambiguateRecordFields, RecordWildCards, RecordPuns, ViewPatterns, GADTs, GADTSyntax, NPlusKPatterns, DoAndIfThenElse, RebindableSyntax, ConstraintKinds, PolyKinds, DataKinds, InstanceSigs, ApplicativeDo, StandaloneDeriving, DeriveDataTypeable, AutoDeriveTypeable, DeriveFunctor, DeriveTraversable, DeriveFoldable, DeriveGeneric, DefaultSignatures, DeriveAnyClass, DeriveLift, DerivingStrategies, TypeSynonymInstances, FlexibleContexts, FlexibleInstances, ConstrainedClassMethods, MultiParamTypeClasses, NullaryTypeClasses, FunctionalDependencies, UnicodeSyntax, ExistentialQuantification, MagicHash, EmptyDataDecls, KindSignatures, RoleAnnotations, ParallelListComp, TransformListComp, MonadComprehensions, GeneralizedNewtypeDeriving, RecursiveDo, PostfixOperators, TupleSections, PatternGuards, LiberalTypeSynonyms, RankNTypes, ImpredicativeTypes, TypeOperators, ExplicitNamespaces, PackageImports, ExplicitForAll, AlternativeLayoutRule, AlternativeLayoutRuleTransitional, DatatypeContexts, NondecreasingIndentation, RelaxedLayout, TraditionalRecordSyntax, LambdaCase, MultiWayIf, BinaryLiterals, NegativeLiterals, DuplicateRecordFields, OverloadedLabels, EmptyCase, PatternSynonyms, PartialTypeSignatures, NamedWildCards, StaticPointers, TypeApplications, Strict, StrictData, MonadFailDesugaring.

I've seen different subsets of these extensions recommended. Is there some canonical subset recommended by anyone doing real work in Haskell?

Even more off topic: in reference to your note about the word "stubbornness", the word "bookkeeping" has three consecutive doubled letters. I believe that there are no examples of four consecutive doubled letters in any ordinary English dictionary, but I ought to grep the longer lexicons I've got at home.


> It seems like different users program in different, incompatible, languages!

None of them are incompatible as far as I am aware. If they are, I'm sure it's in very minor ways that don't occur in practice.


I'm sorry, I don't understand, or maybe I wasn't clear. These extensions change the meanings (semantics and syntax) of Haskell programs accepted by the compiler. A program which would normally terminate under Haskell's ordinary lazy semantics might not terminate when using the Strict extension, which changes the default semantics to be less lazy.

As far as I can tell from blogs like [1], the other language extensions also change the language in ways that are not forward and backward compatible with out-of-the-box Haskell.

[1] https://ocharles.org.uk/blog/


You were perfectly clear, but just mostly incorrect, I think.

You are correct that Strict is a language extension that significantly changes the semantics of a Haskell program. It's a new extension so it didn't cross my mind. But besides Strict the number of extensions that change semantics of existing programs in a significant but non-forward and -backward compatible way is very, very small.

Basically, the summary of Haskell extensions is:

Around half of them are new features that should be in the language standard but aren't because a new standard hasn't been produced for years. Around half of them extend (not change) the language in powerful ways and are best avoided unless you really need them. Around ε% are things that significantly change the semantics, and 0% are things that mean you're programming in a "different, incompatible, language" (all extensions are compatible across module boundaries).
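
For what it's worth, extensions are enabled per module with a pragma at the top of the file, which is part of why they stay contained. A small example with LambdaCase, which is purely syntactic sugar:

  {-# LANGUAGE LambdaCase #-}

  module Example where

  -- \case desugars to \x -> case x of ...
  describe :: Maybe Int -> String
  describe = \case
    Nothing -> "nothing at all"
    Just n  -> "got " ++ show n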


Thanks, I won't give up on Haskell yet. You make a good point that should have been obvious to me: the extensions are confined to the module boundaries.


Good luck on your Haskell journey! I hang out on https://www.reddit.com/r/haskell/ so feel free to come over there if you have any questions.

(And to add some precision, some extensions may be visible across boundaries, but almost all are not, and the extent to which they are is very small).


> and it led me down the Haskell rathole.

was this through pugs? To this day, I think the days of pugs development and "commit bit to everyone" were the most fun I've seen in open source.


It was pugs, and it was pretty fun.


Crystal (http://crystal-lang.org) has fibers and channels. The current implementation isn't multi-threaded, but that's being worked on. So you'll get that concurrency + nice syntax + types + a wonderful stdlib.

EDIT: as seen in other responses, options abound: Python 3, Elixir, D, Dart and more all have built-in concurrency primitives, not to mention that discarding nodejs because of "callback hell" is at this point laughable.


Not sure about channels, but D has fibers. It's one of the reasons vibe.d[0] was made.

[0]: http://vibed.org/features

Note: Vibe.d is a web framework as well as a networking stack for D. If you're even remotely curious about D or do any web development, I recommend you check out Vibe.d; it is quite impressive (at the very least to me).

Edit:

There's apparently jin.go / Go.d:

https://github.com/nin-jin/go.d


I am really, really excited about Crystal. I wish it success, but I am afraid that if the Rails shops and big companies don't start using it, it will be too little, too late.


That would be unfortunate. I really like Crystal as a fast, typed alternative to Ruby, but I also prefer Ruby as a language to Go or even Elixir, which not everyone does.


> Why is this important? Concurrency is hard and if you want M:N thread multiplexing (i.e. WEB SCALE CODE, where application threads aren’t pinned to pthreads) your options today are precisely Erlang, Go, .NET, and Perl 6.

I guess the author doesn't know about Haskell? The concurrency story for Haskell is great, and using the right library, you can literally just define types for the routes for your backend and then ask it to generate JS for your frontend.


By "right library" do you mean Servant? (I'm interested.)


Here's an overview I wrote a while back.

https://news.ycombinator.com/item?id=14149200


Yep. Servant is great.


Does Haskell have thread multiplexing?


Yes, as well as excellent abstractions for putting those threads to work. See [parallel](https://hackage.haskell.org/package/parallel), [async](https://hackage.haskell.org/package/async), and [monad-par](https://hackage.haskell.org/package/monad-par) for a few examples. If you just need a web server [Warp](https://hackage.haskell.org/package/warp) is extremely competitive.
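
For a tiny taste of async: a sketch, with placeholder actions, of its `concurrently` helper:

  import Control.Concurrent (threadDelay)
  import Control.Concurrent.Async (concurrently)

  main :: IO ()
  main = do
    -- run both actions on separate lightweight threads and wait for both;
    -- if either throws an exception, the other is cancelled
    (a, b) <- concurrently
                (threadDelay 1000000 >> return "slow")
                (return (42 :: Int))
    print (a, b)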


HN doesn't support full Markdown. :(


I think you're thinking of CommonMark. That link format isn't valid in canonical Markdown either (just to some of the common variants that also went by "Markdown").

I was always partial to the variants that could do citations well. They would automatically link-ify items postfixed by [1] etc with the link included at the citation at the bottom. I implemented a customer facig newsletter system at one job using that, where it automatically made the emails multipart when markdown was submitted, with the text part being raw Markdown. When using citations, it's still very readable.


I thought CommonMark was the attempt to standardize Markdown. Besides that, every markdown implementation I've touched since before CommonMark supported [text](url) format.


Actually, I'm totally wrong. I thought this wasn't part of the original spec for some reason, but it's totally in Gruber's list of features and works in Markdown.pl, I just missed it when checking to confirm originally (I saw automatic links and missed the other links feature).

Thanks for causing me to look again (I used babelmark[1] to check what versions supported it and noticed all of them do, including Markdown.pl). I dislike putting out wrong information. :/

1: http://johnmacfarlane.net/babelmark2/?text=%5Btext%5D(url)


That's the way I format my bigger HN comments. Didn't know [this](wasn't) part of the Markdown standard, though. Thanks!


Actually, see your sibling comment. I was incorrect. That is a standard Markdown feature.


Yep. “forkIO” creates a lightweight thread. There are also things like “forkOS” to create a heavyweight thread and “forkOn” to pin a lightweight thread to a particular capability (core, more or less).
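
A quick sketch of all three (the delay at the end is a crude way to let the children run; forkOS needs GHC's -threaded runtime):

  import Control.Concurrent

  main :: IO ()
  main = do
    _ <- forkIO (putStrLn "lightweight green thread")
    _ <- forkOS (putStrLn "bound to its own OS thread")
    _ <- forkOn 0 (putStrLn "pinned to capability 0")
    threadDelay 100000  -- give the children time to print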


edit: dumb snark, but the replies are useful


No need for the snark. This isn't fanboyism, though I gladly admit that I'm fond of Haskell. I'm not just namedropping Haskell "just because". He claims those languages are the only one that have green threads. He's wrong. I corrected that.


You're right, that was overly snarky, but your post does come off as use Haskell -> magic.

The concurrency stories for Erlang and Go are front and center, very well defined, and obvious. How do you get similar functionality in Haskell (specifically, what libraries do what you are talking about)?


"The concurrency stories for Erlang and Go are front and center, very well defined, and obvious."

GHC Haskell has basically had the same concurrency story since GHC 7.0.1 (16 November 2010 [1]):

- You can start tens or even hundreds of thousands of threads

- Inside them you can program like having synchronous I/O

- And the runtime handles it efficiently (with epoll/kqueue/...)

The difference is that Haskell is less opinionated on the high-level concurrency primitives, so you have many choices (sometimes with multiple different library implementations) like: locking, explicit async code, transactional memory, channels... https://hackage.haskell.org/packages/#cat:Concurrency

[1] Release Notes: "On POSIX platforms, there is a new I/O manager based on epoll/kqueue/poll, which allows multithreaded I/O code to scale to a much larger number (100k+) of threads" https://downloads.haskell.org/~ghc/7.0.1/docs/html/users_gui...


The library I was referring to is called servant[1]. It's a library for writing backends and clients in Haskell. I mentioned it because he said something about "webscale", but it doesn't have anything to do with concurrency.

The go-to library for nice Haskell concurrency is probably async[2], which you can combine with stm[3] for very nice guarantees. If you're interested, there is a very nice, free book[4] on the subject.

[1] https://haskell-servant.github.io/ [2] https://www.stackage.org/package/async [3] https://www.stackage.org/package/stm [4] http://chimera.labs.oreilly.com/books/1230000000929/index.ht...


I have no idea where this incredulity even comes from, you tools, nor the claim from TFA, which was patently absurd. Pretty much every fucking major language not listed has green threads, if not built into the core (Python, Ruby) then provided as a well-known library (Java, C++). And for any new language it's pretty much table stakes.

The article's claim is so ridiculous it's not even wrong, and you go on to snark about a Haskell counterexample. What a complete asshole.


Without a "Major" sponsor, or a "Rails" type killer application, Perl 6, or indeed any language, will struggle to get significant traction.

It is of course possible to gain slowly over time, but with numerous languages competing, that path may be a dead-end.


I agree here. If Perl 6, however, gets anything close to Perl 5's Mojolicious, that would be more than enough to switch to it. I haven't used Perl in many years, but it's by far the best web framework I've ever used. It's really well thought out, made all the right design decisions from the beginning, and is just really great code itself.

But such gems tend to be unique, also in completely different fields. And it probably wouldn't have been possible if the original developer hadn't been involved and gained experience through Catalyst, which is roughly Perl's take on Ruby on Rails.

If it had gotten a bit more press in its early days outside of the Perl community, I think the world of web development could look very different today.

As I said, I am not a Perl developer anymore (and don't focus on the web part anymore either), and I am sure there is a bit of nostalgia in those lines. But even though there are great frameworks in other languages, no doubt, and there are impressive projects, I still haven't found something that allowed such rapid development, and transitioning of prototype applications into production services in such a seamless manner.

I think it's also a great example of Perl being very readable despite all the ugly scripts you find out there, which of course exist partly because a big chunk of them were written by non-Perl developers who just needed to get a script going. Just like with JavaScript and PHP.

It's funny though how the presence of a well done web application framework is nowadays considered the killer application in a pretty consistent way. I've seen that in various languages that weren't really made for web application programming as well. There is an obvious reason, as the web is the agreed-upon API for most applications; however, I wonder if this might not bring stagnation and a kind of cycle that only allows a certain way of thinking and developing to succeed, as standards and programming languages/frameworks co-evolve.


Perl 6 still has to overcome Perl 5, which is everywhere and is a great language. I never get around to using Perl 6 because 5 is already there in every Linux distribution.


I wish Linux distros shipped with more recent language versions, e.g. Python 3.6 and Perl 6 in addition to just 3.4/5 and 5. I understand that it's not their job to try and push people to update their language skills, but it'd be nice to have access to them right off the bat if they feel like it.


The difference between Python minor versions (e.g. 3.4 to 3.6) is tiny compared to the difference between Perl 5 and 6.

From what I've seen, the difference between Python 2.x and 3.x is tiny compared to the Perl 5/6 change.


I think the difference between Perl 5.10 (2007) and 5.26 (2017) is bigger than the difference between the Python 2 of 2007 and the Python 3 of 2017.

The difference between Perl 5 and Perl 6 is probably about 10x the difference between Perl 5.10 and 5.26

I like to explain it this way:

Perl 4 is to C

as Perl 5 is to C++

as Perl 6 is to Haskell/Smalltalk/C#/Swift/Julia…


Then again I've got Perl code I wrote in the 90s that still runs untouched on the latest Perl 5. Much more recent Python 2 code (of mine) still won't run untouched under Python 3.

So I might agree actually on the "bigger", but the "noticeability" does have a lot to do with the general feeling.

(to be clear, I agree with all of your comment)


I'm not extremely familiar with Perl 6 to be able to draw a comparison.

From what I know about Python 2/3, it's that you can write maybe 90% on average/in general of your code the same way without much thought about the differences. However, if you do wish to adopt the new features, there's plenty to keep you busy writing scripts for at least a few months, maybe even a year.


Perl 6 is a different language from Perl 5. It's considered next in the "Perl family of languages". There are various things you can do to get interoperability between them, and run code from one in the other's interpreter, but the changes are much more pronounced than Python's 2->3 changes. It's closer to the difference between PHP and Perl 5. People familiar with Perl 5 will find a lot of the design decisions and concepts similar in Perl 6, but it will still be like learning a new, but similar, language to Perl 5.


python3.x vs 2.x is nowhere near the jump from perl5 to perl6.

It's basically a different language that shares some things, with few users. It would be nice to see it shipped, but you might as well wish distributions shipped with Crystal or Nim.


Hey, it was a wish lol. I know we live in the real world here



Red Hat 7.3 ships perl 5.16.3 - it's from 2013. 6.9 ships perl 5.10.1 from 2009.


    $ cat /etc/redhat-release
    CentOS Linux release 7.3.1611 (Core)

    $ yum info rakudo
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: mirror.team-cymru.org
     * epel: mirror.oss.ou.edu
     * extras: mirror.cloud-bricks.net
     * updates: mirror.atlantic.net
    Available Packages
    Name        : rakudo
    Arch        : x86_64
    Version     : 0.2017.04.2
    Release     : 1.el7
    Size        : 3.4 M
    Repo        : epel/x86_64
    Summary     : Perl 6 compiler implementation that runs on MoarVM
    URL         : http://rakudo.org/
    License     : Artistic 2.0
    Description : Rakudo Perl 6, or just Rakudo, is an implementation of the
                : Perl 6 language specification. More information about Perl 6 is available
                : from <http://perl6.org/>. This package provides a Perl 6 compiler built for
                : MoarVM virtual machine.


Read the article, very informative. Pleasantly surprised to find out that it doesn't mention at all language traits like syntax, but focuses on the features of Perl6's underlying VM, specifically MoarVM.


Indeed, a GILless bytecode VM is worth a look.


GIL-less yes, but not lockless. The previous Perl 6 VM, Parrot, had lockless threads, scaling linearly per native core. MoarVM even needs to lock on hash or array accesses, which is certainly not state of the art.


How did parrot write arrays from multiple threads without a lock somewhere in there?


You defer updates to the owner via the scheduler, with high priority, and continue. Parrot threads are lock-free but not wait-free. The NQP compiler needed to know about threaded writes and schedule the update, with a semaphore, while the other threads continue.

What MoarVM got better was everything else. The GC, the calling convention, the OO (6model), the jit. I was on my way to fix all the parrot damage done from the previous decade, because this would have enabled all architectures and proper threading for perl6, but then perl6 decided to kill it and go with Moar. And spread a lot of lies on threads.


I fail to see how using a semaphore to schedule work in another thread is "lock-free."


Because you wait but you don't lock. And the wait is not 100x the cost of an instruction; normally it's about the time of one single instruction. Writes are fast and atomic, because there's no lock. Only the writer needs to wait a bit, not any other thread.

Google for lock-free vs wait-free


The parrot threads didn't communicate with each other and you could wait until all of them finished. They provided a tiny subset of the functionality real threads offer but in a more efficient way.


Well, it should have mentioned Perl6 uses rational number math instead of floating point. I think this is a big advantage, since you can do more precise calculations using just the core language.


Maybe it's because I'm old, but I don't see the appeal of M:N multiplexing in a programming language (i.e. 'green threads' or some other user-level context switching)

For long-running tasks, if they are I/O bound you can use non-blocking I/O and event loops. If they are CPU bound, then use threads or separate processes. The two techniques can be combined to scale well across multiple cores.

The OS is designed to schedule workloads, it has decades of development in doing this, and has all the system-wide information needed to schedule tasks well across the CPUs. why re-implement the wheel in your programming language?


> For long-running tasks, if they are I/O bound you can use non-blocking I/O and event loops. If they are CPU bound, then use threads or separate processes. The two techniques can be combined to scale well across multiple cores.

That combining is what M:N threading is. Green threads are application level threads. "threads" are OS level threads. M:N threading is the ability to take M application level ("green") threads, and schedule them on N actual threads.

> why re-implement the wheel in your programming language?

Because for some things, OS level threads are far too heavy. For others, application level threads don't achieve the parallelism required to efficiently use the hardware available. Combining them should allow scaling well across multiple cores, as you noted yourself. The difference here is that they are part of the language, which means they can be easily and efficiently used without setup by everyone using the language.
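To make that concrete, here's a minimal Perl 6 sketch (the task count and the sleep are just illustrative, not a benchmark). Each `start` block is a lightweight task scheduled on the shared thread pool rather than being pinned to its own pthread:

    # Launch 10_000 lightweight tasks; the runtime multiplexes them
    # onto a small pool of OS threads.
    my @promises = (^10_000).map: -> $i {
        start {
            sleep 0.01;   # stand-in for some blocking-looking work
            $i * 2;
        }
    }
    say sum await @promises;   # 99990000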


>> For long-running tasks, if they are I/O bound you can use non-blocking I/O and event loops. If they are CPU bound, then use threads or separate processes. The two techniques can be combined to scale well across multiple cores.

> That combining is what M:N threading is. Green threads are application level threads. "threads" are OS level threads. M:N threading is the ability to take M application level ("green") threads, and schedule them on N actual threads.

More or less. I would say that manually threading continuations through async calls doesn't count as M:N. Not even when done semi-automatically via the stack-less coroutines which are popular in recent languages, as they are a very leaky abstraction.

I would even argue that a proper M:N system should have, if not full preemption, at least a best effort attempt at guaranteeing forward progress by implicit insertion of yields.


You can call Thread.yield() at any time.

They are preemptive, and many things you do which will block effectively call Thread.yield() to give other things a chance to run.

I once wrote some code which took my 4-core CPU to about 500% utilization, and it was very simple code (on a hyperthreaded Core i7). It was purposefully written to be fairly daft, as there were plenty of better ways to write it. I just wanted to see what would happen, and how long it would take to complete.


Edit: I mistakenly read "OS-level threads" again where s/he said "application level threads" in the quote.

> For others, application level threads don't achieve the parallelism required to efficiently use the hardware available.

You've got this backwards: OS-level threads execute in parallel (or, rather, may be executed in parallel), while green threads spawned within the same OS-level thread execute sequentially on a single processor core (but in no predetermined order).


I think you missed where I linked "application" threads to "green" threads, as opposed to OS level threads.


My apologies; I read your comment too quickly.


My point is that 'green threads/application level threading' is, IMO, not very useful in a language. As you say, it's the bit where putting them on actual threads is a win. A language that does its own green threads or co-operative multitasking is trying to do the kernel's job, and it will never be as good.

It's at the purported 'web scale' in the article (I'm guessing this means lots of tasks, or lots of work) that green threads are least useful, because you know there that the workload will be big enough to keep all CPU cores busy, and multi-threading/multi-process is always going to be a win. Adding green threading to an already maxed-out computer is just extra overhead for no benefit.


You shouldn't look at it just in terms of how well it can utilize system resources. It allows for language level concurrent programming support which facilitates different programming styles. Some people find this style of programming 'fits' them and are attracted to these languages as a medium for expression.

So while other languages can use threads and async-IO perfectly well to keep all a systems resources utilized, that is not the only point of a programming language. Programming languages are primarily a means of communication.


It turns out that IO can dominate. When this happens, having green threads helps, because CPUs that would otherwise pend on IO availability will instead be freed to work on other tasks.


M:N threads exist because operating system threads are too expensive (despite decades of development, OS threads operate at the wrong abstraction layer). If operating system threads scaled reasonably well, there would be (virtually) no need for M:N threads.


I agree that they can be too heavyweight for short-lived work. E.g., if a program needs to sort a small(ish) amount of data, it would be great if the language could make the sort utilise all the available cores on the machine, without the base program needing to do any kind of multi-process or multi-threading work. Spinning up threads to do this kind of thing could be too slow to be of any benefit, but you could use thread pools to save on the startup costs. This is the kind of problem that OpenMP tries to solve.

But for long-lived workloads, or lots & lots of tiny requests, (the OP talks about 'web-scale', whatever that is), you would be creating the processes and threads once, at startup, and then they just all keep busy with little overhead.


> it would be great if the language could make the sort utilise all the available cores on the machine.

I'm not sure how I feel about this, because a single program doesn't own the machine. I don't necessarily want a language to automatically use all the cores any more than I want the language to automatically use all my memory or bandwidth.

I think the GOMAXPROCS approach kind of makes sense. You can spin up as many serial independent tasks as you want, and go will use up to GOMAXPROCS threads.


You just described M:N threading, and what's being touted in this article as good about Perl 6.


Not really - no :N (green threads/co-operative multitasking) needed here. It's one task that wants to use multiple real CPUs, instead of multiple tasks that want to use one CPU.


I was referring to your second paragraph but had missed your reference to the OP, which makes it obvious what you are trying to say.


Sorry if I wasn't very clear. I'm kind of arguing against one side of M:N language automation, and for the other side of it.


i.e. SIMD, i.e. something like:

    my @array = 1, 2 ... 1_000_000;
    @array>>++;
where the `>>++` operation increments each of a million integers with the workload distributed across multiple real CPUs?


Yes, or perhaps more generally, any loop where the compiler/runtime can infer that each iteration of the loop is independent of the others, and so they could theoretically run on multiple cores.

It's SIMD, exactly as you say, but potentially on much larger blocks of code than single arithmetic operations. A compiler that can pick where to use SIMD is doing the same kind of analysis. The difficulty in doing it at compile time is knowing whether or not it will be worth the cost.

In your example, the compiler could see that the array has a million entries and so spreading the work across threads might be a win. But in general, the compiler probably can't tell roughly how many iterations a loop is going to have, so it doesn't know where to thread and where not to.


Right.

In P6 (potential) parallelism is explicit. You have to use an explicit construct such as the `<<` or `>>` meta operators.

I used a simple arithmetic operation but it can be any code. Of course, it had better be parallel safe.

The code-gen at compile time makes an AOT call on whether to attempt parallelization. If the JIT thinks that's not working out during a particular run it may decide to de-optimize.

> more generally, any loop where ... each iteration of the loop is independent of the others

Right. The `<<` and `>>` meta operators aren't the only relevant features. For example there's a `.race` method that not only (potentially) parallelizes a map, grep or whatever, but provides further explicit control such as batching size.
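For example, a small sketch of that explicit opt-in (the batch size is made up; note that `.race` trades away result ordering, while the related `.hyper` preserves it):

    my @input = 1 .. 1_000_000;
    my @squares = @input.race(batch => 4096).map(* ** 2);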


True, but combining them can be a pain. My current codebase in C++ has both posix threads (std::thread) and libuv event-driven code. They're both fine, but it's a nuisance whenever they intersect. I can't drop libuv, because there's a nodejs VM inside. I can't drop std::thread, because some other libraries that do IO run synchronously.

A universal M:N threading system with coroutines underlying everything means that both styles of code can happily coexist.


> For long-running tasks, if they are I/O bound you can use non-blocking I/O and event loops. If they are CPU bound, then use threads or separate processes. The two techniques can be combined to scale well across multiple cores.

One might suppose this to be the case, but to my knowledge only network IO is truly non-blocking; although there are async IO APIs in some languages for disk reads/writes, they're still blocking behind the scenes. One of the coolest things about Go (imo) is its scheduling of many goroutines onto relatively few machine threads (and processes) for blocking IO. I'm not clear if Perl 6 has such a facility.


I'm not an expert, but I think you're right for POSIX.

Windows does apparently offer the necessary building blocks; this is the most relevant HN sub-discussion: https://news.ycombinator.com/item?id=9584269


aio - POSIX asynchronous I/O overview

http://man7.org/linux/man-pages/man7/aio.7.html


Maybe not quite there yet?

https://lwn.net/Articles/724198/ (May 30, 2017)

In truth, an AIO operation can block in the kernel for a number of reasons, making AIO difficult to use in situations where the calling thread truly cannot afford to block. A longstanding patch set aiming to improve this situation would appear to be nearing completion, but it is more of a step in the right direction than a true solution to the problem.

In other words, AIO seems likely to remain useful only for the handful of applications that perform direct I/O to files.


In my limited knowledge the only app I've seen use aio is oracle database.

If it works in the sense that the kernel supports the API and implements aio internally using async mechanisms, then that would be awesome. Is that the case?


Perl6 (with the MoarVM backend at least) should have proper async IO on both network and file IO.

jnthn (one of the primary contributors to MoarVM) has a write up about recent work done to improve the system here: https://6guts.wordpress.com/2017/06/08/sorting-out-synchrono...


Yeah, non-blocking disk I/O is painful. I've never had to write code that did enough disk reading/writing to try to parallelise it, so I've never researched the solutions that much. What I don't understand is why disk I/O should require a different API to be non-blocking. After all, network I/O can be blocking or non-blocking with the same read() and write() calls that disk I/O uses.

There must be some reason why no OS offers non-blocking disk I/O in this way, but I don't know what it is.


The short version is that disk I/O is integrated with the page cache with all that entails, and allows things like seeking.

To an extent this is a design rooted in a world where disks were much faster than network access. But perhaps that world is coming back with SSDs/NVMe/etc.


NT has asynchronous I/O for disk. And network, and anything I/O-based. In UNIX land, you have read() and write(). These calls translate to system calls, and for most cases, a driver is called upon to process the I/O request. The driver starts the request, and then after some time has passed and the underlying hardware has completed the I/O, "completes" the request, which bubbles back to the user via the return of the read() or write() call.

Now, with NT, you have ReadFileEx() and WriteFileEx(). However, a user can call them in such a way that the semantics are: "hey, try and read this, if you can do it immediately without blocking, great... if you have to block, then do whatever you need to do in the background to make that happen, but still return to me without blocking".

That, and that alone, is the key difference between the inherently synchronous I/O model of UNIX, and the inherently asynchronous I/O model of NT. The entire NT I/O subsystem, cache manager, driver API, memory management, APCs, scheduling et al is predicated around the notion of every I/O request being asynchronous.

If everything happens to be in the right spot at the right time, sometimes an I/O call can be synchronous (i.e. user->kernel->user without a context switch due to a required wait). In every other case, the kernel won't be able to complete it there and then, so, it checks to see if the user still wants that read or write call to return immediately -- which implies "asynchronous I/O" (referred to as "overlapped" I/O in NT parlance, because you're overlapping an I/O request with more compute).

Windows kernel drivers are fundamentally more complex than corresponding Linux drivers because the kernel's I/O model is fundamentally more sophisticated -- everything is packet driven (the "I/O request packet", or Irp), your driver's read/write entry points need to be able to query the incoming I/O request and determine if the user wants sync/async, how you need to return the call so that the I/O manager can furnish the correct behavior to all the other pieces of the subsystem (and potentially other drivers that are layered higher and lower), and a huge number of other subtle details.

The added complexity is required because the fundamental I/O model is asynchronous. In the UNIX synchronous I/O model, there's simply no semantic concept -- at both the driver level, kernel level, and APIs exposed to the user -- to say "here, read() this and return immediately -- if it can be done synchronously, great, if not, kick it off in the background and give me some opaque structure back I can use in the future to check on the completion of the operation".

The other huge advantage of NT is the notion of thread-agnostic I/O. That is, the thread that initiates one of these asynchronous read requests doesn't have to be the same thread that completes it. Although it sounds simple, that's one of those tip-of-the-iceberg technical things where there are so many pieces behind the scenes that need to cooperate to facilitate the functionality. I talk a little bit about thread-agnostic I/O here: https://speakerdeck.com/trent/pyparallel-how-we-removed-the-....

So, to summarize, all discussions regarding asynchronous I/O and M:N threading on UNIX are sort of fundamentally flawed because the underlying primitives can't express what is actually needed (an asynchronous I/O subsystem at the kernel level, thread-agnostic completion-oriented I/O, and ideally, thread pools + completion ports) to achieve the end goal: optimally using your underlying hardware :-)

(Optimal hardware usage necessitates one thread running per core, and the ability for any one of these threads to continue program logic upon completion of an I/O request, regardless of whether or not they were the thread to initiate that request.)


I'm glad Perl6 finally went the route of MoarVM. I always felt that ParrotVM (which was supposed to also run Python and Ruby) was a terrible distraction and scope creep. The idea was laudable but it would have required buy-in from the other communities. As much as I like Perl, I don't think that would have been fair to them, as Python and Ruby have evolved into their own respective identities.


> The idea was laudable but it would have required buy-in from the other communities

But imagine how cool it would have been - write a class in Python, sub-class it from Ruby, use that subclass in a function written in Perl, call that function from Lua code...

Okay, I see it now. It was just too awesome for its time. Maybe in a hundred years or so...


JVM is now becoming what Parrot was intended to be with Graal/Truffle.


I guess so.

But without Perl 5 and Perl 6 it is just not going to be the same. (Pouty face)


Step 1. Fix Rakudo on the JVM

Step 2. Fix the perl5-in-perl6 work

Step 3. Retreat to a corner and sob until you recover some SAN points


Best argument to use Perl 6 is that it's fun. Like really fun. Whatever language you are using is boring. Perl 6 is just damn fun. It's like your favorite language but with dollar signs, and funner.


Perl5 is also fun. Until you have to work with other people's code, or even your own older code.


Many P5 folk claim that it remains fun if the P5 code is well written.

But regardless, having fun with P6, which is specifically designed to be readable, maintainable, composable, refactorable, etc. is fundamentally different from having fun with a language like P5.


Perl 6 is indeed fun - it's even optimised for it [1]!

I'm enjoying using it for command-line tools and web applications - it's expressive and scales with the problem space.

1. https://perl6advent.wordpress.com/2015/12/20/perl-6-christma...


Can anyone recommend a good book on Perl 6? Are there any (even bad ones)?

Right now I feel the major reason that keeps me from investing time in Perl 6 - besides adoption by distros - is the lack of a good book, like the Llama and the Camel books for Perl 5. It's kind of frustrating after having waited so long.


This one is 'in progress' https://www.learningperl6.com/

Looking at the Perl 6 website, there are a few more coming along as well. https://perl6.org/resources/


... but nothing from Larry Wall. I don't get it. Perl 6 without a "Programming Perl 6" (O'Reilly or Pragmatic) will just delay adoption.


Nice, thank you very much!


https://perl6book.com/ gives an overview of Perl 6 books, and how they differ. Enjoy!


> adoption by distros

This has improved, and I think most distros now ship Rakudo (Perl6 on MoarVM). At least: Debian, Ubuntu, Fedora, and Arch do.


I just checked on my desktop (openSUSE Tumbleweed) - it has both MoarVM and Rakudo. Fun times lie ahead! :-)


CentOS 7 has a package for it if you enable the EPEL repo.



Think Perl 6 is already out and free to download... very well written!


Perl 6 is the spec tests, and is open-source

Rakudo is the only currently supported implementation (ignoring several specialized partial implementations), it is also open-source.


To all who answered: Thank you very, very much!!!


Perl6 is indeed a very nice language that I've been watching for a few years. As soon as the performance beats Python and the stability is solid I'll probably switch over from Python...there's just a lot there.


I have seen examples where the naive Perl 6 implementation is faster than the equivalent C code. (The one I'm thinking of is probably owing to Spesh and the JIT, but I forget what it was about)

Here's an example where a naive Perl 6 version is going to beat the pants off of the naive version in another language:

Find the sum of all integers from 10 to 1000000 inclusive.

In Perl 6:

    say [+] 10 .. 1000000
the [+] is a left fold using the &infix:«+» operator (&infix:«+» is left associative)

This finishes almost immediately. (It also does so if the endpoint is 10¹⁰⁰⁰ or more)

The reason is if you use the built in version of the &infix:«+» operator, it calls the sum method on the Range object. That particular method knows how to calculate the sum on the Range without iterating through the values. If you modify the &infix:«+» operator lexically it won't take that shortcut.

This also works the same if you wrote it like this:

    say Range.new(10,1000000).reduce(&[+])
(There is syntax to omit either or both endpoints with both ways to create a Range)
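That shortcut is just the closed-form Gauss sum, (first + last) × count / 2, where here count is the 999_991 integers from 10 to 1_000_000 inclusive. A quick sanity check:

    say [+] 10 .. 1_000_000;                 # 500000499955
    say (10 + 1_000_000) * 999_991 div 2;    # 500000499955, same result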

There are plenty of areas where Rakudo Perl 6 is slower than I would like, but for many uses it is fast enough.


That is pretty awesome. When I'm talking about performance and stability, I'm just saying we're still seeing a lot of low-hanging fruit being taken care of. For example, Liz is still beast-moding away each week at enhancing the performance of lists, sets, & bags by 100-1000x in some cases. At some point, nearly all of those kinds of fixes will have landed, and I think that's when it'll start getting used over Python.


That's pretty cool! I started trying to learn perl6 and write my first interesting program. I have a copy of 'ray tracing in a weekend' and am trying to translate the basic ray tracer from c++ to perl6.

It's remarkably slow right now. It takes 1.5 minutes to generate an image. I tried to do the same thing in Julia and it takes 1.5 seconds. So I guess I must be doing something wrong!


Probably not wrong but quite possibly in a way that's not been optimised well. I'm sure people on Freenode IRC #perl6 would be interested in your work. Don't forget `perl6 --profile script.p6` gives nice profiler output out of the box!


Maybe it'd be fun to hack on https://github.com/raydiak/pray?


On a sidenote

> or Nginx or Node.js without the callback cacciatore.

What's a cacciatore in this context? It means hunter in Italian. Probably something messy if it means the same as callback hell.


I read it as a short version of chicken cacciatore, which is a dish that can look messy.


That makes sense I suppose, it's a cacciatore made with callbacks instead of chicken.


cacciatore is a style of rustic stew in Italian cooking.


I would have bet that cacciatore was about food. To be pedantic, that's "alla cacciatora" but I won't be picky with Italian food terms outside Italy, it's good advertising anyway (I'm an Italian in Italy.)


> To be pedantic, that's "alla cacciatora" but I won't be picky with Italian food terms outside Italy

It's good that you won't be picky, since you're arguing against decades of "cacciatore" for Italian Americans. ;)


"alla cacciatora" is the name of the style in Italian, that somehow got mangled in english.

Kinda like "gabagool" which is effectively an american only name for an italian product we call "capocollo".

pmontra is italian, hence the confusion.


Great quote on perl5...

In #devops is turtle all way down but at bottom is perl script. - @devopsborat


I like how the original was agnostic, yet you interpreted as perl5 :-)


I think it's common knowledge that perl6 is basically rare in deployment, whereas perl5 scripts are everywhere in legacy systems. Former CPAN author myself.


why is everybody saying they are put off by Go? Did we pass the "trend" phase and now it's cool to say go sucks?


Tongue-in-cheek answer: lol no generics :)

To be serious, in my circle of developer-friends, we feel that it tries to be TOO simple, and disagree with some of the decisions of how the language works (ex. seemingly endless `if err != nil { return nil, err }` type things)


I always find it a bit funny when devs try to put a label of "too simple" on a language.

How would your friends label "brainfuck"? Too simple because it has a very limited number of identifiers, or too hard because accomplishing anything is a nightmare?


Simple is not the opposite of hard. The opposites are complex and easy.


For me, other than the fact that I don't need yet another Algol reimplementation in my life, I'm also not interested in code littered with error handling. Also strongly prefer FP.


I don't understand how some other language would not need to handle all the errors?

I have written code in Go/Java/PHP/Python/JS, and error handling has been a major chunk of the lines in most if not all cases (in the other cases the errors are just not handled). The ideal happy-path flow is always the easiest to build.


I highly recommend that you try out a language with good support for option/maybe types: Haskell and Rust are good options here. Very often, your error handling through a chain of operations will be to use a slightly different operator (say '?.' instead of '.') and any errors will be automatically propagated to the end of that section of code, where you can handle them all in one place.

Alternatively, give Erlang or Elixir a shot: their primary mode of error handling is "don't, let it crash", which is possible due to the relatively unique process model.


> I highly recommend that you try out a language with good support for option/maybe types

Right. Like Perl 6.

> Very often, your error handling through a chain of operations will be to use a slightly different operator (say '?.' Instead of '.') and any errors will be automatically propagated to the end of that section of code where you can handle them all in one place.

P6 does this stuff particularly well.

It makes option types opt out. You have to explicitly specify the equivalents of Just or None, otherwise you're automatically dealing with an option type if you're dealing with a type.

Conditional constructs are designed to work well in this context.
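A minimal sketch of what that looks like in practice (the `find-user` routine is hypothetical):

    sub find-user(Str $name --> Str) {
        return 'alice@example.com' if $name eq 'alice';
        return Str;   # the "None" case: an undefined Str type object
    }

    with find-user('bob') -> $email { say "mailing $email" }
    else                            { say "no such user" }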

One can use ML style pattern matching as those conditional constructs.

And other styles of matching.

Error exceptions are unified with error values, allowing codebase/author A to use one style, and B to use the other but pretend that A's code used B's style, and for C to use B's code in whichever style they prefer, seamlessly promoting/demoting between warnings, errors, and exceptions, including across language boundaries, as fits a use case.

And so on. Perl 6 learned from Perl 5 but it also learned from Haskell and many other languages.

> Alternatively, give Erlang or Elixir a shot

Note that the author of the OP is well known in the Erlang community.


How is "opt-out option types" different from, say, Java, where everything is nullable and all the problems that come with that?

Honest question, as I haven't had a chance to look into P6 as much as I would like. It looks pretty cool in a lot of ways, but I'm worried that all of the options available would create a confusing mess.

> Note that the author of the OP is well known in the Erlang community.

Yeah, my comment was directed at the parent who asked:

> I don't understand how some other language would not need to handle all the errors ?

And was intending to give a brief description of how other languages (possibly including P6, though I don't have the experience to say) handle errors differently from the languages they listed (go/java/php/python/js).


> How is "opt-out option types" different from, say, Java, where everything is nullable and all the problems that come with that?

Aiui the Java compiler and language constructs generally don't know the difference.

The P6 Nothing isn't a null, it's a "type object" with memory- and type- safe behavior.
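For instance:

    my Int $n;        # never assigned: $n holds the Int type object
    say $n.defined;   # False
    say $n ~~ Int:U;  # True: the :U smiley matches only undefined values
    say $n ~~ Int:D;  # False: :D matches only defined values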

There's a tiny bit more about this at: https://en.wikipedia.org/wiki/Option_type#Perl_6


> I don't understand how some other language would not need to handle all the errors ?

Erlang and its VM don't need error handling (edit: like the kind in the languages you listed).

You use pattern matching and you match for the outcome you want. So every other state is a "let it crash and burn" because you don't care. Because of this mentality you don't have to play God and figure out all the possible failure states to error-check.

The VM enables this type of thinking because it's preemptive. And there are also supervisor trees to reset your program state. The error checking is in the form of monitors and robustness, I guess, but it's not in the usual form of other popular languages like the ones you listed.

This is why I prefer Erlang over Go in concurrency. The BEAM VM is good. Also no generics.


You've not yet tried Erlang.


I don't know about others, but I personally have always seen Go as an internal language for Google. This made me feel like it was appropriate for projects that I personally control, but unlikely to be useful in a career. That's really just a feeling, though, and I recognize that if everyone felt like I do, it would make Go a niche language that was actually probably pretty lucrative to know. So I wouldn't say I'm put off by it, so much as the balance of factors hasn't tipped in favor of learning it yet.


I can't speak for your location but I've found no shortage of companies choosing Go in London, from fintech through to online games. So it's definitely not a niche language used only by Google.

I can understand other people's complaints about the language though.


Like I said, it's the belief that it's a niche language that's kind of making it a niche language - that is, fewer people know it (because of the perceived niche-ness), but more people want it, meaning the pay is good. I'm not at all surprised to hear there are quite a few shops using it.

Granted, in my area, there are exactly none, but I don't live in the best location for the use of modern tool/language/technology stacks. If I were to move an hour or two north, I'd probably find many.


I'm not really sure what the point is you're trying to make.

* Are you saying you don't use it because you consider it niche?

* Or are you saying you believe people don't use it because they believe it is niche?

* Or is your point that Go is niche because people consider it to be niche so it never gains critical mass?

In any case, it's not niche. Period. It's being used lots by both start-ups and more established businesses alike. I'm not saying it has the kind of penetration that Java or Python does, but Go is still definitely "mainstream" (for want of a better description) these days.


I was waiting for it to mature. Things like generics and package management seem to have been not only forgotten initially but actually avoided.


For me Go is only relevant to the extent I might have to deal with Docker and K8s related infrastructure code.


Go was no-go for me from the get-go (hmm I should probably get better phrases but F it). Its design goals just don't align with my preferences at all.


The design go-als, you mean.


It doesn't suck, but personally I prefer languages that offer more abstractions rather than fewer, because I like writing less code. And I prefer exceptions to having to explicitly handle every potential error.

Go to me feels like Java did back in the 90s (in the sense that Java only offered some of the power of a language like Smalltalk while making you jump through hoops, and thus you had to write more boilerplate).


When a language is too popular it's being hated by HN / Reddit. If you want to stay cool you have to use a niche language.


no exceptions

no generics


lol no generics


Can Perl6 merge all of its generated VM code into one "package"? One of my favourite things about Go is that you can statically compile everything and then deployment basically becomes scp.


I find it somewhat humorous that more than 25 years on, we are effectively reverting to static linking and bundling [1].

Way ... way back in the day, the argument was it would save memory, encourage reuse, etc. While some of this may have been true, it also gave rise to dynamic library hell. Reuse was overshadowed by incompatible versions, or API changes between versions that became the stuff of legends.

This would make a great study in how the ROI was completely overshadowed by risk considerations not considered at the outset.

Of course, saving memory was a huge issue in the days when 1GB RAM was considered gigantic. So maybe the cost was worth it.

[1] https://scalability.org/2017/03/what-is-old-is-new-again/


> Reuse was overshadowed by incompatible versions, or API changes between versions that became the stuff of legends.

The Perl community has spent a couple decades "enjoying" taking (or being forced to take) a long hard look at that.

One result in P6 is :auth (authority), :api (api version) and :ver (package version) qualifiers, eg:

    use Some::Module:ver<1.2>:api<3.4>;
with corresponding support throughout the language and packaging tools.

Aiui this aspect of the language and tooling needs work but shows promise.


> Way ... way back in the day, the argument was it would save memory ... it also gave rise to dynamic library hell.

Could you expand a bit on how static linking would cause either of those things? It seems to me that they are both consequences of dynamic linking.


I think "the argument" being referred to was the push for dynamic linking, and the consequences in question are for dynamic linking. I think there was just a somewhat unclear implicit shift in the subject.


...and then deployment basically becomes scp.

And security updates for your deployment become...what?


Rebuild and scp the whole lot again? Not necessarily a big deal.


Rebuilding by pulling in all your dependencies from the Internet again? That's not something I want to do when the clock is ticking to get a security update out. What if some random third party dependency is hosted somewhere that is down, or has been pulled?

And how do I know I have to rebuild in the first place?

You'll probably tell me that there are tools to help deal with this. I'm sure there are. My point is that it isn't as simple as "scp".


Or doing patches. Copy the file locally... apply the patch... restart process.


My personal issue with Perl is how much of a mess the syntax is (10 different ways of doing the same thing), and the lack of a standard library (CPAN is not a viable replacement).


I'll take 10 different ways to do the same thing with different approaches over one "true" way with no flexibility any day.

Perl gives the programmer a lot of rope to work with. Whether one ties that rope into a drawbridge or a noose is the programmer's choice.

Also, CPAN is fantastic and widely supported. I really wish more languages would follow its example, especially when it comes to package namespaces.


Why is CPAN not a viable replacement? It has always been perfectly viable for me.


CPAN packages are written by thousands of different people with varying styles (cf. the issue with 10 different ways of doing the same thing), quality standards and maintenance schedules.

In Python I can expect the standard library to be consistent and be maintained as part of the core language. Now, I don't mean that everything has to be in the standard library (I'm fine with the database access libraries being third-party, for example), but in Perl even the most basic stuff like exceptions is only on CPAN, and there are again multiple packages to choose from.


The unfortunate thing for Python is that many/most of the modules that are included were written when the language was fairly new, at a time when nobody was an expert in the language.

Which means that perhaps for many of them there is a better module out in the ecosystem that almost nobody uses because they have one extra step to install them.

If you want something close to that in Perl 6, just install Inline::Python and you can use all of those “batteries included” with Python as if they were modules written in Perl 6.

Exceptions in Perl 6 are much easier to deal with than they are in Perl 5.
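For instance, a small sketch (a bare `die $message` in Perl 6 produces a typed X::AdHoc exception that you can match on):

    sub risky { die 'boom' }

    risky();
    CATCH {
        when X::AdHoc { say "caught: {.payload}" }   # $_ is the exception here
    }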


> 10 different ways of doing the same thing

Mostly an urban myth. Where it isn't, at least 8 of those ways are actually important.

Exercise: write a python program that prints Python's quoting documentation, without external files, without editing the documentation. Spoiler alert: it's impossible.


So, how exactly are these different ways of getting sub arguments useful?

    sub add1 {
        my ($arg1, $arg2) = @_;
        $arg1 + $arg2;
    }

    sub add2 {
        my $arg1 = shift;
        # possibly two screens of dense code here, then
        my $arg2 = shift;
        $arg1 + $arg2;
    }

    sub add3 {
        $_[0] + $_[1]
    }

    # There may be some other ways to extract arguments 
    #  that I am not aware of.
    # It's possible to combine any of the methods!
And this is just argument access, one of the core language primitives.


> So, how exactly these different ways of getting sub arguments are useful?

Because in the second (add2), you might do something interesting with the second (or third) arguments depending on the first. You might have optional arguments. Your arguments might be a list of items you don't know the size of. It allows you to develop what you need out of the extensible core mechanism. Modules can and have built on that.

> # There may be some other ways to extract arguments

There are. Now you can just do

    use feature 'signatures';                    # Perl 5.20+
    no warnings 'experimental::signatures';

    sub add( $arg1, $arg2 ) {
        $arg1 + $arg2;
    }
My personal favorite is to use Function::Parameters[1]

    fun add( Num $arg1, Num $arg2 ) {
        $arg1 + $arg2;
    }
which allows named params, type constraints (runtime), default params, etc.

1: https://metacpan.org/pod/Function::Parameters


I cannot imagine any practical way of (ab)using this flexibility that could not be covered in Python with its single way of handling arguments.


To be a bit pedantic, these are not different ways to get a sub's arguments, but different ways of accessing an array (the "default array", in this case). If I understand this correctly, your criticism is aimed at how the core language makes arguments available to a sub body, in the default array.

The first example uses the explicitly named default array @_. This is a common pattern, and easy to read.

The second one omits the "default". Note that whilst it is possible to write some dense code between the $arg1 and $arg2, it won't work as expected if the dense code bit has array access, i.e., the default array can change in the code.

The third example uses the typical sigils for accessing individual elements within an array (default or otherwise).

I wish the core language had saner defaults, but over time, I've seen some reasonable uses for the different styles:

1. General subs, the same as you example

2. shift removes the first element of the array and returns it. This can make the code more readable in certain cases:

  use Params::Validate 'validate';
  sub add2 {
    my $self = shift;
    my $args = validate (@_, { ... } ); 
  }
3. Slightly less verbose code, for simple one-liners and/or anonymous functions:

  my $calc_functions = {
    'add' => sub { $_[0] + $_[1] },
    'subtract' => sub { $_[0] - $_[1] },
    ...
  };
  my $func = 'add';
  my @args = (2, 3);
  $calc_functions->{$func}->(@args);


The significant part of argument handling in Perl 5 is that @_ is a list of aliases to the arguments. You usually see add1 or add2 style argument processing because it gives nice named arguments that are copies of the function parameters.

add3 shows how to access the aliased arguments directly.

  # break the aliasing of @_ and make undef into 0.
  sub undef_to_zero { @_ = map { $_ // 0 } @_; }

  sub math_stuff {
    # call with identical @_
    &undef_to_zero;
    # Do math without undef warnings.
    my $sum = 0;
    $sum += $_ for @_;
  }

  # Alter all arguments in calls by repeating them
  sub fatten {
    $_ = "$_" x 2 for @_;
  }

  $foo = "abc";
  @bar = ( 1, 2, 4 );
  fatten($foo, @bar);
  say $foo; # abcabc
  say for @bar; # 11
               # 22
               # 44


You aren't really showing off different ways to get arguments in any of those examples. In all of them, you get the arguments in a list called @_; you're just showing off three different ways to get values out of a list, which Python has plenty of ways to do as well. Translating your examples into Python:

    def add1(*_):
        _ = list(_)

        arg1, arg2 = _
        return arg1 + arg2

    def add2(*_):
        _ = list(_)

        arg1 = _.pop(0)
        arg2 = _.pop(0)
        return arg1 + arg2

    def add3(*_):
        _ = list(_)

        return _[0] + _[1]


That does not change the fact that in Python everybody writes

    def add(x, y):
       return x + y
whereas the Perl codebase that I work with has every possible combination of these methods.


But people using Python do at different times use destructuring, pop, and indexing to get values out of a list, which are the different ways of doing the same thing that you demonstrated.

e: Just clarifying that your complaint here is that every sub in Perl 5 just receives a list of its arguments; this isn't a "more than one way to do it" issue.


Why does Python have other options if no one is using them? Are these other options cruft in the language design, the compiler, or just reasonable support for features intended for other, perhaps infrequent, usage?

The old P5 motto TIMTOWTDI was long ago updated to TIMTOWTDIBSCINABTE and P6 adopts the latter. While P6 supports most of the options shown for P5, most folk writing P6 will just write something like:

    sub add($x, $y) { $x + $y }


I don't mean to detract from your point, but I have a related question:

How much actual cognitive overhead is there associated with three or four different ways of unpacking args?

Speaking for myself (as a long-time Perl/Python/GoLang/etc programmer), when I jump into a language I don't know, it doesn't take me long to get 'muscle memory' for how such things work.


Come up with a style guide and stick to it. Failure to do that with any language, no matter how restrictive, leads to messy code.

It doesn't matter which way you unpack your args unless you are trying to do something fancy (the sub 1% cases). Just pick one.

Hell, it's easy enough to make a script to rewrite multiple shifts into destructured list assignment or vice versa.


The second example is just bad code. The third example one would use rarely, in one-liners or very short subs, maybe in a dispatch table. You can write atrocious code in just about any language. Perl 5.20 finally has function signatures anyway.


you'll be happy to know that the sigils (@) don't change in perl6 for access.


I see no problem here.


I'm not sure I understand your exercise. What do you mean by "Python's quoting documentation", and what do you mean by "editing" it?


> What do you mean by "Python's quoting documentation"

String literals are described by the following lexical definitions:

   stringliteral:   shortstring | longstring
   shortstring:     "'" shortstringitem* "'" | '"' shortstringitem* '"'
   longstring:      "'''" longstringitem* "'''" | '"""' longstringitem* '"""'
   shortstringitem: shortstringchar | escapeseq
   longstringitem:  longstringchar | escapeseq
   shortstringchar: <any ASCII character except "\" or newline or the quote>
   longstringchar:  <any ASCII character except "\">
   escapeseq:       "\" <any ASCII character>

You can't paste that text into a file and write a Python program around it that outputs the same text. You need to encode the quote characters.

More generally, there is no way in (many languages including) Python to treat an arbitrary (yet known) block of data as data.


For those that don't know, you can use `「` and `」` to quote anything that does not contain `」`, and Perl 6 won't process it in any way other than storing it as a Str.

If you need to have `」` in there, you can fall back to `Q{{{{{{…}}}}}}` or `Q:to<END>;␤…␤END␤`
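For example, the first form:

    my $text = 「No \escapes and no $interpolation happen in here, "quotes" and all.」;
    say $text;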


I was wondering about that, too. Unless `"Python's quoting documentation"` is an easter-egg in "help()" that recursively contains a quoted version of itself, it should be possible to just escape all problematic characters. But maybe that counts as `"editing"`? Not sure.


> maybe that counts as `"editing"`

how would it not


Even though this could more properly be called, "Why having language support for M:N thread multiplexing is important", I thought it was a refreshing article on why I might actually use a bit of perl6.

Last year I attended a conference and saw Larry Wall speak. It was an overview of Perl 6 and I was completely underwhelmed. Larry spent about half the time talking about unicode support. It wasn't a boring talk, but I never felt a moment when I said, "Awesome! This is a pain point in some other languages and I would pick up perl6 if this ever happened again."

I don't want to write perl6 completely off, but I have found that perl6 advocates have not done a great job on why you would actually want to use it. It's hard to justify learning "different" syntax just because someone says, "hey, it's fun!".


I saw an old coworker at a Python meetup session the other week and remembered that he and others had to support all these Perl scripts. Since leaving that place and seeing so much Python everywhere, I wondered the other day where Perl had wound up in the mix of things several years later. Seems like the community kept going.


Better math is one good reason. Grammars are also a great feature of Perl 6.
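For a taste of grammars, here's a toy sketch (the grammar name and input are made up):

    grammar KeyValue {
        token TOP   { <pair>+ %% \n }
        token pair  { <key> '=' <value> }
        token key   { \w+ }
        token value { \N+ }
    }

    my $m = KeyValue.parse("name=Camelia\ncolor=rainbow\n");
    say $m<pair>[1]<value>;   # rainbow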


I've never faced a problem in another modern language where math became a big problem. Even using something like numpy never made me frustrated with, say, Python's numerical types.

From what I've seen perl6 makes some operations more convenient. Cool, I guess, but not a great reason to reach for perl6.

Grammars are really cool, and one reason I might reach for Perl 6, but you also have really great special-purpose tools like Antlr. I suppose I don't need grammars often enough to want them reified in my language of choice; next time I need to do some parsing, however, I will probably play around with Perl 6.


I should have been more specific. Try this in Ruby, Python, PHP or Javascript: 0.1 + 0.2 - 0.3. You'll get an approximation in most languages, but only Perl 6 returns 0.
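That's because decimal literals in Perl 6 are Rats (rationals), not floats:

    say 0.1 + 0.2 - 0.3;   # 0, exactly
    say 0.1.WHAT;          # (Rat)
    say 0.1.nude;          # (1 10): numerator and denominator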


Yes, I understand the example. It's a classic floating point numbers demonstration.

What I am saying is that I have not encountered a real-life case where this would have made my life easier.

Again, it's neat, but not a real reason to move to a new language.


Banking/accounting software? Generating reports that are expected to return correct data w/o cumulative errors from floating point arithmetic? After all, Perl is the Practical Extraction and Report Language.


I'm not saying there aren't applications where it is useful. If you can't tolerate errors, then you typically don't use floats in the first place. We've been running banking/accounting software for a long time without Perl 6.

What I am saying is that this isn't a killer feature.

It makes me go, "Oh, neat", and not, "Wow, this will make my life so much easier".


Then I have to assume you're not working in any kind of commercial or financial sector where approximations are not acceptable. Perl 6 and Go should be lauded for solving this problem.


Actually, I work in analytics well inside the commercial sector. 10 digits of accuracy works most of the time, and when it doesn't there are other alternatives. As with anything there are trade-offs.

I wish more languages would do this, and I happily applaud Perl 6 and Go for allowing sane operations on real numbers without needing separate packages.

But there are high quality packages for all modern languages that allow you to get precision if you need it.

I'm not debating the utility of this feature, but I am saying it's not--by itself--going to make me want to pick up Perl 6 for my next project.



Extending Evan's analogy, perhaps trying to explain the joke ruins the joke?


thanks for touching the heart of an old perlmonk :)


It's pretty cool to see Perl adding this feature. My rubric for the three big scripting languages (once upon a time?) was:

Perl: Good at regular expressions and proper per-processor-core threading. Bad because what you wrote will be indecipherable to future you.

Python: Very expressive (without being too much), lots of library support, good drop-to-C/do-stuff-fast support, sensible support for object and functional programming paradigms that evolved over time, super useful stdlib. Bad because GIL.

Ruby: ???? at least it's beautiful? seems to only be used in codebases that are rails-based which is basically a red flag for me now due to personal preference

It looks like Perl just got another killer feature


Perl 5's base object system was based on Python's.

Ruby has been referred to as Perl 5 with a better object system.

Perl 5 also has Moose, an object system based on Perl 6's, which has been ported to Python and to Ruby twice.

Calling into C with NativeCall is almost seamless most of the time. It has also been used to make using modules/libraries from other languages even more seamless (usually the only way you know a module is from a different language is the `:from<Python>` in the use directive).
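
For a taste, a minimal sketch of a NativeCall binding, using libc's getpid as an assumed example (with no library argument, symbols already loaded into the process are searched):

    use NativeCall;

    # Declare the C function's signature; NativeCall supplies the body.
    sub getpid(--> int32) is native {};

    say getpid();   # prints the current process ID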


The thing that really gets me about perl is the syntax; it's horrendous.

No other reason, that's all. Just the syntax.


It's _different_. But working for a few years with Perl 5, it really grew on me. It could use a good "Perl: The Good Parts" book, I guess.

I haven't looked at Perl 6 yet, don't know if that's better or worse. Too many other things to do, and I don't think I'll ever work with Perl professionally again, that ship has probably sailed.


Higher Order Perl might serve for that. (I haven't really read any other Perl books.)


And you can read it for nothing, if you are interested:

http://hop.perl.plover.com/book/pdf/HigherOrderPerl.pdf


The Perl 5 equivalent is Damian Conway's Perl Best Practices


PBP is very good, but pretty desperately needs an update. There might end up being a new version for Perl 6, but I doubt it will ever get one for Perl 5 and that's unfortunate.

Several pretty big recommendations (particularly regarding OOP in Perl) from PBP aren't really regarded as having stood the test of time very well.


Blame Aho, Weinberger, and Kernighan. Oh and Larry :)


The good thing about awk is that it's limited, so you know when to stop using it and move to a different language once your requirements exceed awk's comfort zone.

Larry Wall came along and made a language inspired by awk that doesn't give feedback when you're out of the comfort zone. This lets you make a mess before you know what you've done (aka technical debt). The same problem 'that killed Smalltalk and could have killed ruby' too (thanks, Bob Martin).


Why are the awk guys to blame for Perl syntax?


> (It's sorta like sed, but not. It's sorta like awk, but not. etc.)

Guilty as charged. Perl is happily ugly, and happily derivative.

    Usenet article <1992Aug26.184221.29627@netlabs.com> (1992) -- Larry Wall


I don't see the problem here... those are good tools, and Perl has those tools' power in the core language. Good stuff if you ask me.


Does this not help with handling this issue in Python, or is it considered a hack? http://effbot.org/zone/thread-synchronization.htm


I like writing bash scripts a lot and perl is a natural step-up with powerful libraries to use. As others have said, the major obstacle for me has been the lack of great literature.


There's lots of literature out there for Perl 5 (like the so-called "Camel Book"). Perl 6 literature is pretty scarce, though.


There are 2 out now and many in development. So really, not that bad.



No idea the other ones came out so fast :)


For small tasks I enjoy the aesthetics of small dedicated Unix tools (grep, sed, awk, cut, sort) working together much more.

Perl tries to be everything at once, yet still mysteriously lacks an interactive REPL out of the box. Yes, Python is more heavyweight for scripting, but it pays off in the long term with better maintenance and tooling for big scripts.


So do you like small tools or having everything built in (like a REPL)?

It sounds like you are justifying an emotional attachment rather than making consistent arguments.

Now, there is nothing wrong with having an emotional attachment to a language or other tool, as long as you know it for what it is.

Python is a great language and has some really nice features; its C extension system is miles better than Perl 5's horrible mess called XS. But to me, using it feels like wearing my shoes on the wrong feet. I really prefer Perl 5.

From my experience and from listening to colleagues, Python has one edge on Perl 5. The two have very similar tooling abilities, and long-term maintenance of large systems is pretty much the same. Python's edge is the ability to find and hire qualified people, because the community has done a better job articulating Python's strengths and recruiting new people.


The first version of Perl basically combined many Unix tools into one language, along with features from C, so that they'd be easier to use together.

In fact there are two flip-flop operators in Perl 5: one that works like the one in awk (`..`), and the other like the one in sed (`...`). Perl 6 renames them to `ff` and `fff`.
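
For example, a sketch of the Perl 6 spelling (assuming a file log.txt with BEGIN/END markers):

    # `ff` turns on when the left regex first matches $_ and back off
    # after the right one matches, like awk's /start/,/end/ ranges.
    for 'log.txt'.IO.lines {
        .say if /BEGIN/ ff /END/;
    }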

I remember hearing/reading that the release of Perl was delayed until a2p and s2p were written, specifically for people who used awk and sed to be able to get going quickly in Perl.


Perl 6 carries two pieces of baggage from its predecessor: prefixing variables with "my" to create lexical scope, and the obligatory "use v6;" at the top of every script. Why can't an advanced language like Perl 6 scope variables without littering "my" everywhere? You don't see it in any other mainstream language I can think of.


> prefixing variables with "my" to create lexical scope

Some people (such as me) prefer designating variable scope and being able to easily identify where a variable was defined. By the way, that isn't required by Perl 5. The community decided it was beneficial, in that it solved far more debugging problems than the small bit of extra effort required, so the common refrain of "use strict;" was adopted; with that pragma, it is required.

> Why can't an advanced language like Perl 6 scope variables without littering "my" everywhere?

It could, but the community has decided it's not really beneficial. Have you actually examined why you think it's better?

> obligatory "use v6;" at the top of every script

That's not obligatory; you can omit it at your leisure. If you do include it and your script is accidentally called from a Perl 5 interpreter, you will either get an error or, if supported, it will try to load a Perl 6 interpreter to parse the code.


prefixing variables with "my" to create lexical scope

I strongly disagree: from a language design perspective, block-level lexical scoping with explicit declarations is superior to all alternatives I'm aware of.


How is this different than prefixing variables with var in JavaScript?


Just to be pedantic ;)

var creates variables that are function scoped. my creates variables that are block scoped.

let in JavaScript would be pretty much the same as my.

Both are good things as they prevent typos from being new variables.
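
A small sketch of that block scoping in Perl 6 (the variable name is made up):

    {
        my $count = 42;   # `my` makes it visible only inside this block
        say $count;       # 42
    }
    # say $count;         # compile-time error: '$count' is not declared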


> Guess what it outputs? Nothing! Just kidding, you’ll see a lot of distressed messages from tasks that passed out wait for the phone to ring, and woke up wearing someone else’s OS thread.

Well, as a matter of fact I really got no output at all :(


> Actually, the program will output nothing if you’re using a release of Perl 6 prior to Rakudo Star 2017.07; a caching bug previously caused $*THREAD.id to report an out-of-date value.


Needs a correction re: node.js. It does have await and does switch context now.


This would've been so much easier if they just called Perl 6 something else. Would've saved them so much aggravation.



> Concurrency is hard and if you want M:N thread multiplexing (i.e. WEB SCALE CODE, where application threads aren’t pinned to pthreads) your options today are precisely Erlang, Go, .NET, and Perl 6

Why mention .NET here? Async code in .NET works with a dispatcher thread (-pool) under the hood, which is nothing like the greenish threads that the other systems offer. Am I missing something?


TPL; tasks are the greenish threads you mention.

You can also just call Windows fibers directly, if feeling adventurous.


I mean, core.async in Clojure works the same way; would you say it's the same?

It might not be the same under the hood, but as perceived by the programmer it is?


I'll stick with Tcl if I'm going to use a glue language that has green-thread-like functionality and an event loop. My early choices were Tcl or Perl: I went with Tcl and never looked back.

As far as web development in perl... well, have fun convincing everyone on the node.js and python bandwagons to move 'back' to perl. Glad it works for you.


I've done some neat things with Tcl and Python.

Python comes with a C module for interacting with an embedded Tcl interpreter (meaning the Tcl and Python interpreters run within the same process). It's mainly used for GUI programming from Python, but the interface is generic enough to write some nice interop code.

For example, I have a library that seamlessly lets you use Python classes and objects from Tcl and vice versa.

From there we merged the Tcl and Qt event loops so that we could have PyQt widgets playing nicely with Tcl/Tk widgets.

This was all part of an effort to migrate from a Tcl/Tk codebase to Python/PyQt.

I wish I could share the code because it's kind of neat. Only around 600 lines of Python code to do all that magic.


I hadn't heard of Tcl before but it looks awesome! What do you find its ideal use cases to be?


TCL is pretty old school and doesn't get a lot of attention nowadays. The main use I'm aware of is in BigIP F5 config files. There are some really interesting things about it though, for example how control flow constructs (if/else, while) are implemented as commands, using the built-in uplevel and upvar[1] commands to control the scope of the currently executing code. Some people say it's Lisp-like; that's one of the things they mean.

TCL has an interesting history; its creator is also known for Ousterhout's Dichotomy[2], which is useful when disambiguating between "scripting" and "other" languages (an argument that occurs frequently on the internet).

Antirez has written about TCL; if you're an admirer of him or his work (redis), you may find his take on it interesting[3].

[1] https://en.wikipedia.org/wiki/Tcl#Uplevel

[2] https://en.wikipedia.org/wiki/Ousterhout%27s_dichotomy

[3] http://antirez.com/articoli/tclmisunderstood.html


I wrote a blog post where I show how to define "until" using Uplevel: http://christopherchase.cc/posts/tcl-until.html


upvar and uplevel are also the only parts of the language that I do not use in production scripting. They have their place. It is not in end user space.


Tcl is pretty old. A lot of the territory it covered well is now better covered by other things.

One of its great strengths was being extensible. It's pretty easy to bake some functionality into Tcl and provide a new shell with it. This is what expect (http://expect.sourceforge.net/) is, for instance.

Another early advantage of Tcl was Tk, which provided Tcl extended with GUI stuff. Tk was cross-platform and established very early on, so if you wanted to provide one GUI for Unix, Mac and Windows, it was a good choice. It was very ugly for a long time; I hear it has gotten better lately but I don't know that myself. Tk is still built-in to Python as Tkinter. This is another specialty shell provided by Tcl, wish ("window shell").

Tcl was supposed to be good for embedding. The original author was supposedly using it for robotics. I haven't seen much energy there; I think Tcl's lunch probably got eaten by Lua some time ago. Lua is smaller, more like a modern programming language and easier to embed, but Tcl is more like shell scripting, so I guess there might be style reasons why you might prefer Tcl despite Lua's advantages.

Tcl is easy to disparage, but there's a lot of misinformation out there. At one time, it really only had one data type (strings). Quotation is a big topic in Tcl, but like Lisp, you can supply your own control structures. Everything is a command and commands have a lot of freedom about how to interpret their arguments. It is rather Lispy, unlike Lua. Tcl does have modern data structures and it does pretty snazzy things behind the scenes to optimize access to them while making things still seem to be strings for the cases where that matters.

Tcl is still around in some surprising places. I wouldn't expect there being a lot of new development starting with Tcl but it's kind of a fun language to know about. I got the newest edition of the book and it's a neat book, but I confess I haven't found ideal use cases for it.


Old is relative. There is a good core dev community, and recent development is promising. It's interesting you mention Lua, because that's a language I picked up when looking to move on from Tcl at some point in the last 8 years. I decided rather quickly against it.

The reason is that the SWIG bindings for Tcl are sufficient to generate decent package cores for just about anything I care to extend, and I can optimize later.


Old is relative, but there aren't many alternatives to Tcl that are older.


Depending on your exact use case, Forth might be one.


Good point.


> "It [Tcl] is rather lispy..."

Yes it is, and that's a good thing. Over the years Tcl has become increasingly Scheme-like in some respects. In fact without too much trouble I've ported some programs I'd written in Scheme to Tcl because of Tcl's portability and easy access to GUI when needed. OTOH adapting some code from Javascript to Tcl was harder.

Tcl is very useful for utilities. Library support is enormous, so there aren't a whole lot of things that are impossible in that domain. For ~20 years I used a client-server scheduling program that I wrote entirely in Tcl/Tk. It required only minimal maintenance; in fact, I hadn't touched it in the last 10 years it was deployed.

The main difficulty has been marginal debugging support, but there are ways around it so it's not insurmountable.


Not sure if its lunch has been eaten, but TCL was great for embedding and pretty widely used. So, to mix metaphors, it definitely had a few great meals before losing the spotlight.

I've seen it embedded in load balancers and used for test harnesses for embedded devices.

https://devcentral.f5.com/articles/irules-101-01-introductio...


I just purchased the new book by Ashok that covers Tcl (not Tk) 8.6. I have always thought Tcl a fun language and have decided to do more of my daily stuff with it. I hang out on the developers list just to watch what is going on.


I use Tcl/Tk from time to time and it always seems the easy choice when I need a simple GUI and also need to make an executable for Windows (which is trivial with Freewrap).

Some of the things I've written are: a simple standby order input app, a Forms/Reports compiler (sends file to server, waits for response and informs user), and a Ghostscript wrapper for merging lots of PDFs into one file.


Awesome! I'm definitely going to be keeping it in mind as little projects come up that could use a GUI.


If I move out of shell for scripting, I go to Tcl. I also write little utilities with Tcl/Tk to do repetitive tasks that I do often.


TCL is really great for throwing syntax errors at runtime, having no real data structures, and an inconsistent API!


Better than throwing syntax errors at compile time; saves the programmer time. By the time errors happen, it's the customer's problem instead.


TCL is a fun language, I got my start in web development using OpenACS on aolserver with embedded TCL.

I don't want to go back.


Yup, Tcl is a really nice little language.


I ctrl+F searched these comments for the word "hipster", 0 matches. If this was about JavaScript though...


Perl is pretty much dead; I don't know anyone still using it outside of legacy code.


Nobody's using P6 for legacy code.


Does anybody use P6 for any serious code?


It's still pretty new, so the library infrastructure sucks (but getting better every day).

I'm only peripherally involved in the community, but at this point, it seems people are using it primarily for small utilities and such.


What kind of sane person would use Perl (6 or older) on a new project nowadays?


Lots of "sane" people would (at least for Perl 5). I know it's not "the new hotness", but Perl 5 is still an incredibly useful language that a lot of people know (present company included). There are extensive libraries available for doing a lot of things that need to be done in day-to-day business. There are plenty of perfectly legitimate places to use it in new development.

The Perl 5 porters development list has a decent amount of traffic ( http://www.nntp.perl.org/group/perl.perl5.porters/2017.html ). And Perl 5 still has a good number of active contributors ( https://www.openhub.net/p/perl/contributors ). Also, The CPAN has plenty of modules with daily activity ( http://search.cpan.org/recent ).

Even if you would not choose to do new development in Perl 5 (because of other language preferences, or whatever), that does not mean it is unreasonable (or "insane") for others to do so.

Whether Perl 6 will ever catch on, I don't know. The language has a lot of features that look interesting, and I think it could also be very useful. I personally have not had time to look at it much yet, but probably will when I don't have higher priority things in life preventing me from doing so.


Me, albeit for a small project. It was really fun to work on: an interactive blog.


Not true. There is still a lot of new Perl 5 code being written. Most of it is likely not very high profile, but it is still used a lot.


All I think about when I hear "M:N thread multiplexing" is "1992 called; it wants its stupid thread hacks back".

M:N is what you do when you don't have real kernel thread support. You hacked N user-space threads per process (N:1), but then the 1990s rolled in and cheap SMP with M processors started to spread across the land, so you needed N:M to take a bit of advantage of that.


Perl 6 has M:N thread multiplexing along with normal thread support. It's just that M:N thread multiplexing is easier to use because all of the hard work is done for you.
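
A minimal sketch of the high-level interface, assuming the default scheduler (the printed thread IDs will vary from run to run):

    # `start` schedules a block on the ThreadPoolScheduler (the M:N layer);
    # Thread.new would instead pin work to a dedicated OS thread.
    my @promises = (1..4).map: -> $n {
        start { say "task $n on thread " ~ $*THREAD.id }
    };
    await @promises;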


It's terrible that you put Erlang on a list with other languages/VMs.

>There’s no GIL, so unlike Those Other Languages

There is no GIL in the EVM either, but by playing with words you make it look like they're all the same. This doesn't deserve the top of HN and is just stupid.

If your Perl6 VM is _so_ great, why didn't you mention anything about what's really important, like preemptive scheduling, which has been Erlang's unbeatable feature since the beginning? Less hype and more details, please.


FWIW, the author is very well-known and respected in the Erlang world for his work on Chicago Boss. Any perceived slights against Erlang are likely unintentional.


The author of the article likely hasn't dug much into the VM code yet.

https://github.com/MoarVM/MoarVM/#feature-overview



