I had quite a few of those animal shirts, along with many Stonehenge party shirts. Gave most of them away since I wear collared shirts now that I'm a grown-up.
True, but I don't remember it being nearly as convenient to distribute those modules, since it still required the whole build environment on the target, and you still had to deal with Perl's exceptionally efficient but ancient and cumbersome object and type system.
XS wasn't _that_ bad once you got the hang of it. Anyway, I do remember Ruby 1.6 coming out and being blown away by how much better the experience of creating distributable C modules was. The class system was flat and easy to access, you could map Ruby language concepts into C almost directly, and the garbage collection system was fully accessible.
Perl 6 started being discussed right around this time, and I think it was clear in the early years that it wasn't going to try to compete on these grounds at all, instead focusing on more abstract and complex language features.
Anyway... even seeing your name just brings me back to that wonderful time in my life, so don't get me wrong, I loved Perl. But that was my memory of the time, and why I think I finally just walked away from Perl entirely.
It's not a race. Perl got there fast by basically not giving a damn about anything.
Perl (talking about Perl 5, don't know anything about Raku, don't want to know anything about Raku) simply treats strings as sequences of numbers without requiring numbers to be in the 8-bit range. This makes it easy to say that those numbers could in principle be Unicode codepoints. The problem is that the actual assumptions about what those numbers represent are implicit in programmers' minds, and not explicit in the language, much less enforced in any way. The assumptions shift as strings are passed between different libraries, and sometimes different programmers working on the same codebases have different ideas. Perl will happily do things like encode an already-encoded string, or decode an already-decoded string, or concatenate an encoded string with an unencoded string, or reverse a utf-8 string by reversing the encoded byte sequence, etc. etc. So it's easier in Perl than in any other language I've ever used to end up with byte salad.
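To make that failure mode concrete, here is a minimal sketch (mine, not from the comment above) using the core Encode module; the string values are just illustrative, but the double-encoding and byte-reversal behavior is exactly what's described:

```perl
use strict;
use warnings;
use Encode qw(encode decode);

my $chars = "caf\x{e9}";               # decoded string: 4 characters, last is U+00E9
my $bytes = encode('UTF-8', $chars);   # encoded octets: "caf\xc3\xa9" (5 bytes)

# Nothing in the language marks $bytes as "already encoded", so this runs
# happily -- and produces mojibake when the result is later read as UTF-8:
my $double = encode('UTF-8', $bytes);

# Likewise, reversing the octet string splits the multi-byte sequence
# for the final character, yielding invalid UTF-8 rather than a reversed word:
my $reversed_bytes = scalar reverse $bytes;

printf "chars:  %d characters\n", length $chars;    # 4
printf "bytes:  %d bytes\n",      length $bytes;    # 5
printf "double: %d bytes\n",      length $double;   # 7
```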
It'll take you, let's say, the first few years of your Perl career, involving painstaking testing of everything you do with nontrivial characters, to truly grok all of that. But the problem is: You're not alone in the world. If you work on a nontrivially-sized project in the real world that heavily utilizes Perl, then byte-salad will be what you will get as input. And byte-salad will be what you will produce as output. It is frustrating as hell.
Unicode was a pretty painful matter in the transition from Python 2 to Python 3, but Python's approach means that the Python ecosystem is now pretty usable with Unicode. This is not the case with Perl at all.
I've had the complete opposite experience. If I need to do more with non-ASCII text than treat it as an opaque blob, I still haven't found anything better or easier than Perl to do it in.
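For what it's worth, the discipline that makes this workable is the usual decode-at-the-boundaries pattern: decode input into character strings, do all the work on characters, and encode only on output. A minimal sketch, with the example string and regex being my own illustration:

```perl
use strict;
use warnings;
use utf8;                             # the source file itself is UTF-8
use open qw(:std :encoding(UTF-8));   # decode/encode on STDIN/STDOUT/STDERR and new handles

my $line = "Größe: 42 µm";            # character string, not a byte blob

# Character-aware operations now behave sensibly:
my $upper = uc $line;                 # Unicode-aware case mapping
my $len   = length $line;             # counts characters, not octets
my ($word) = $line =~ /(\w+)/;        # \w matches "Größe", umlaut included

print "$upper ($len chars), first word: $word\n";
```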
This is a fair observation. At most, my FLOSS Weekly fame was still propelling my career, but I also gave that up after 13 years. It took a while to find my Dart/Flutter groove, but here I am.
Indeed, I started programming early (age 9 or so) and have always seemed to have a talent for programming, but also just a sense that it's fun to work out how to express steps in terms of smaller reusable steps.