Core dev Larry Hastings [0] puts it well. The cost-benefit case for this complicated language feature is limited.

"I dislike the syntax and semantics expressed in PEP 634. I see the match statement as a DSL contrived to look like Python, and to be used inside of Python, but with very different semantics. When you enter a PEP 634 match statement, the rules of the language change completely, and code that looks like existing Python code does something surprisingly very different. It also adds unprecedented new rules to Python, e.g. you can replace one expression with another in the exact same spot in your code, and if one has dots and the other doesn’t, the semantics of what the statement expresses changes completely. And it changes to yet a third set of semantics if you replace the expression with a single _.

I think the bar for adding new syntax to Python at this point in its life should be set very high. The language is already conceptually pretty large, and every new feature means new concepts one must learn if one is to read an arbitrary blob of someone else’s Python code. The bigger the new syntax, the higher the bar should become, and so the bigger payoff the new syntax has to provide. To me, pattern matching doesn’t seem like it’s anywhere near big enough a win to be worth its enormous new conceptual load."
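
A minimal sketch of the dotted-vs-bare-name rule he describes (Python 3.10 semantics; the names here are made up for illustration):

    command = "south"

    class Direction:
        NORTH = "north"

    match command:
        case Direction.NORTH:  # dotted name: a value pattern, compared with ==
            print("going north")
        case NORTH:            # bare name: a capture pattern, so it always
            print(NORTH)       # matches and rebinds NORTH; prints "south"

    # Replacing the bare name with a single _ gives a third behavior again:
    # _ matches anything but binds nothing.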

This while we have elephants in the room such as packaging. Researching best practices to move away from setup.py right now takes you down a rabbit hole of (excellent) blog posts, and yet you still need a setup.py shim to use editable installs, because the new model simply doesn't yet support this fundamental feature.

I can't afford to spend days immersing myself in packaging to the point of writing a PEP, but I would help pay someone to do it well. I can see no way to fund packaging efforts directly on the PSF donations page (edit: see comment). It's great to see pip improving, but there is still not even a coherent guide that I can find for packaging using up-to-date best practices. This appears to be because the best practices are currently slightly broken.

[0] https://discuss.python.org/t/gauging-sentiment-on-pattern-ma...




> This while we have elephants in the room such as packaging.

There is a part of me that wonders, at this point, if basically every new addition to the language itself is secretly just a medium for procrastinating on figuring out what to do about setup.py.

I'm not as down on this PEP as some others seem to be, mind. It's just that, when I think about my actual pain points with Python, this language addition starts to look like a very, very fancy bike shed.


The people that work on packaging don't overlap much with those that work on the core language. The latter don't seem to care about it much, as far as I can tell.


Have you taken a look at PEP 517 (https://www.python.org/dev/peps/pep-0517/)? It enables other tools to replace setup.py (e.g., poetry is pretty nice for making easy-to-package-and-publish pure Python libraries).
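
For instance, a minimal pyproject.toml that hands the build over to poetry's PEP 517 backend looks roughly like this (version pin illustrative):

    [build-system]
    requires = ["poetry-core>=1.0.0"]
    build-backend = "poetry.core.masonry.api"

With that in place, pip can build and install the project without ever touching a setup.py.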


I sort of wonder why I see so many users complaining about package management in Python, while I have been having a fantastic journey with it since 2008, with over 50 published open source packages. "Sort of" because I suspect that the users in question just do not want to integrate upstream changes continuously: they expect the package manager to help them procrastinate on dependency updates, which has proven to lead to disasters such as npm install, and I'm kind of worried that this is the direction they have been taking.

But I admit I use setupmeta for automatic definition of the version; it just makes setup.py even better, and it's basically the only thing I like to add to setup.py, because it simplifies package publishing scripts. I haven't found any feature in pip to verify the GPG signatures that it allows us to upload packages with (python setup.py sdist upload --sign).

As for pattern matching: it is not specific to Python; it's available in many other languages and is a joy to use in OCaml. I see no reason why Python should not have pattern matching.


Anything that breaks and changes semantics should not be allowed into the language. Let Python be Python, not a Frankenstein's monster of ideas like C++. If it were an idea that were Pythonic, you would not see the confusing examples I've seen in the comments. C++ is the poster child of trying to do too much with the language and it losing itself due to death-by-committee. It's very sad that Python has started down this road.


We seem to be seeing a paradigm clash. On one side, there are people who are concerned with whether or not a feature is desirable. On the other side, there are people who are concerned with whether or not a feature is desirable and also Pythonic.


> On one side, there are people who are concerned with whether or not a feature is desirable.

The thing is this: You can add something which, in isolation, seems desirable and positive -- but in the greater picture, is a net negative due to the complexity it adds.

People might say that those who do not like the pattern matching syntax are not obliged to use it. But when developing code in longer-running projects, far more code is read than written. Adding syntax with complex edge cases, especially from languages whose core concepts are quite alien to Python's, adds a burden which is difficult to justify.


Very much so. I run into this with Clojure. It has so many different ways to skin every cat, each with its own unique blend of quirks, that it can be quite difficult to understand other people's code, and, by extension, use the language in a team setting.

That sort of experience leaves me thinking that this is a very dangerous turn to take for a language whose core ecological niche is, "Easy for professionals who don't have a BS in CS to understand and use productively." Lines 2 and 13 of PEP 20 are core to why Python, of all languages, came to dominate that niche. I am beginning to fear that the Python core team, being composed primarily of software engineers, is ill-equipped to properly understand that.


Yep. It doesn't make sense to destroy the language just to get in a particular feature. You don't need a language to do everything. It needs to be good at everything it's meant to be good at.


I very much want to agree with you. It's just that I no longer know what "Pythonic" is supposed to mean.

One thing that Larry Hastings refers to is often underestimated: readability.

It seems nice to be able to do code golfing and use pattern matching to reduce an expression from maybe 50 lines to 10.

But what matters far more is that one can read code easily, without guessing, and without consulting definitions of edge cases. Even in smaller projects, one will read 10 times more code than one writes. In larger projects and as a senior programmer, that could be a factor of 100 or 1000. It's not that unusual to spend a week working through a bug in someone else's code, and fix it by changing a single line. As code becomes more complex, it becomes really important to understand exactly what it means, without guessing. This is key for writing robust, reliable and correct code (and this is perhaps why the ML and functional languages, which stress correctness, tend to be quite compact).

And while it might be satisfying puzzle-solving for smart and easily bored people, like you and me, to write that pattern matching code and reduce its length, it is just not feasible to read through all the PEPs describing surprising syntactic edge cases in a larger code base.


I can only agree. Compared to other languages I find Python increasingly difficult to reason about, mainly due to its dynamicity. If the language complexity increases as well from now on I don't think I will use Python unless absolutely necessary.

Meanwhile, Julia supports pattern matching due to macros: https://github.com/kmsquire/Match.jl


Unfortunately Match.jl has gone unmaintained for a while, but we do have MLStyle.jl

https://github.com/thautwarm/MLStyle.jl

https://thautwarm.github.io/MLStyle.jl/latest/syntax/pattern...


That looks great, thanks.


Some of the English documentation can be quite hard to read, but the code is very useful. Submitting PRs to improve the documentation is welcomed by the maintainers.


Pattern matching is the brainchild of ML. Python, being a multi-paradigm language with a strong functional side, missed this language concept, which is simple in principle and powerful in practice.


> Python, being a multi-paradigm language with a strong functional side

Coming back to that, just a reminder that lambdas in Python are still gimped, closures do not work as expected because of scoping, and core developers in Python 3 tried to remove "map" and "filter", tools that are considered quite essential for functional programming.
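
The scoping gotcha in two lines, for the curious (a sketch of the classic late-binding surprise):

    funcs = [lambda: i for i in range(3)]
    print([f() for f in funcs])  # [2, 2, 2]: each closure captures the variable i, not its value

    # The usual workaround pins the current value via a default argument:
    funcs = [lambda i=i: i for i in range(3)]
    print([f() for f in funcs])  # [0, 1, 2]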


I actually wish they had done so.

As someone who switches between Python and functional languages, I find Python's "map" and "filter" to be a trap, and have taken to scrupulously avoiding them. The problem is that I expect those functions to be pure, and, in Python, they aren't. They actually can't be, not even in principle, because their domain and range include a core datatype that cannot be interacted with in a pure manner: generators. A generator will change its own state every time you touch it. For example:

  >>> seq = (x for x in range(1, 11))
  >>> list(filter(lambda x: x % 2 == 0, seq))
  [2, 4, 6, 8, 10]
  >>> list(filter(lambda x: x % 2 == 1, seq))
  []

In a language that is a good fit for functional programming, the last statement would return [1, 3, 5, 7, 9], not an empty list. But Python is imperative to the core, so much so that I would argue that trying to use it as a functional language is like trying to drive screws with... not even a hammer. A staple gun, maybe?

(Which isn't to say that you can't successfully use some functional techniques in Python. But it's best done in a measured, pragmatic way.)
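
(For what it's worth, the stdlib's escape hatch for reusing a generator is itertools.tee, but you have to know that you need it. A sketch:)

    from itertools import tee

    seq = (x for x in range(1, 11))
    a, b = tee(seq)  # after tee(), only the two copies should be touched

    print(list(filter(lambda x: x % 2 == 0, a)))  # [2, 4, 6, 8, 10]
    print(list(filter(lambda x: x % 2 == 1, b)))  # [1, 3, 5, 7, 9]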


A good example of why immutability by default seems to be the right thing - in Clojure, "seq" would not have been modified by the first filter expression:

    user=> (def seq_ (range 1 11))
    user=> (filter (fn [x] (== (mod x 2) 0)) seq_)
    (2 4 6 8 10)
    user=> (filter (fn [x] (== (mod x 2) 1)) seq_)
    (1 3 5 7 9)
or more concisely:

    user=> (filter even? seq_)
    (2 4 6 8 10)
    user=> (filter odd? seq_)
    (1 3 5 7 9)

And it is also an example of why it does not work to just go and grab one or another desirable feature from a functional language: they need to work together.

> (Which isn't to say that you can't successfully use some functional techniques in Python. But it's best done in a measured, pragmatic way.)

A great example of how to do it right is Python's numpy package. The people who created it knew about functional languages and APL (which fits in nicely, since Python's predecessor ABC had some APL smell). They obviously knew what they were doing, and created a highly usable combination of a general data type and powerful operations on it.
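
To make that concrete, a small sketch of the numpy style: map, filter and reduce without writing a single loop.

    import numpy as np

    a = np.arange(1, 11)
    print(a ** 2)            # elementwise "map": [1 4 9 ... 100]
    print(a[a % 2 == 0])     # boolean-mask "filter": [2 4 6 8 10]
    print(np.add.reduce(a))  # "reduce" over the addition ufunc: 55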


I found this very surprising, so I asked a Python-knowledgeable acquaintance, who mentioned that this works as expected:

    >>> seq = [x for x in range(1,11)]
    >>> list(filter(lambda x: x % 2 == 0, seq))
    [2, 4, 6, 8, 10]
    >>> list(filter(lambda x: x % 2 == 1, seq))
    [1, 3, 5, 7, 9]


Right. Because it's doing two different things. One is working with lists, the other is working with lazy calculations.

A common functional pattern is to do lazy calculations, so that you don't have to store every result in memory all at once. The subtext I'm getting at is that a language that has footguns that make it dangerous (for reasons of correctness, if not performance) to reuse and compose lazy calculations is a language that is fundamentally ill-suited to actual functional programming.

Which is fine! Python's already a great procedural-OO language. It's arguably the best procedural-OO language. Which is a big part of why it's taken over data science, business intelligence, and operations. Those are, incidentally, the domains where I feel just about the least need to program in a functional paradigm. And, in the domains where I do have a strong preference for FP, Python is already a poor choice for plenty of other reasons. (For example, the global interpreter lock eliminates one of the key practical benefits.) No amount of risky, scarring cosmetic surgery is going to change that.


OK, I see: the very short sequence in your example was a stand-in for a potentially infinite one. I got nerd-sniped into understanding what was happening, as I found the behavior surprising.


It is very surprising.

And that is sort of another layer to why I think that people trying to turn Python into a functional language should just simmer down. A core principle in functional programming is that you should be able to swap out different implementations of an abstraction without changing the correctness of the program. That's not really something that can be enforced by most functional languages, but it's at least a precedent they set in their standard libraries. But Python's standard library has a different way of doing things, and favors different approaches to abstraction. And those conventions make the Python ecosystem a hostile environment for functional programming.

Hylang is another interesting example. It's a cool language. But watching how it evolved is slightly disheartening. It sought to be a lisp for Python, but the compromises they needed to make to get the language to interact well with Python have hurt a lot of its lisp feel, and make it a kind of peculiar language to try to learn as someone with a lisp background. IIRC, Python's lack of block scoping was an elephant in the room for that project, too.


To be fair, that's a footgun in Haskell as well. Using lists non-linearly like this in Haskell gives you the correct results, but at a 10x performance tax or worse, because it can't optimize into a loop anymore.


At least to me, a footgun goes beyond a mere performance gotcha. There's a whole category difference between, "If you do this, the compiler may not be able to optimize your code as well," and, "If you do this, your code may produce incorrect results."


> Python, being a multi-paradigm language with a strong functional side

I would doubt that. Surely, things like Numpy are written in a functional fashion, but Python relies very much on statements, iteration, things not being expressions, and almost everything except string and number literals being mutable, and there are no block-scoped name bindings, which are essential to functional languages.

And the attempt to add the latter to Python might end in a far bigger train wreck than C++ is already.

Mixing OOP and functional style works, more or less, for Scala, but everyone agrees that Scala is a hugely complex language. And in contrast to Python, it has immutable values.

What could turn out better would be to create a new language which runs interoperably in the same VM (much like Clojure runs alongside Java and can call into it). And that new language, perhaps with the file extension ".pfpy", would be almost purely functional, perhaps like a Scheme or Hy, without those dreaded parentheses. That would at least leverage Python's existing libraries.


> This while we have elephants in the room such as packaging. Researching best practices to move away from setup.py right now takes you down a rabbit hole of (excellent) blog posts, and yet you still need a setup.py shim to use editable installs, because the new model simply doesn't yet support this fundamental feature.

You can do editable installs with poetry; I do it every day.

Just run this:

    \rm -rv dist/; poetry build --format sdist && tar --wildcards -xvf dist/*.tar.gz -O '*/setup.py' > setup.py && pip3 install --prefix="${HOME}/.local/" --editable .

More details here: https://github.com/python-poetry/poetry/issues/34#issuecomme...


I do editable installs every day with setup.py – what is your point? Poetry is a very interesting third party solution to these issues, but not a best practice. Best practices matter in packaging.


Struggling a bit with Python and JS-related packaging stuff recently, I'm also wondering: in a clean-room environment, what does good Python packaging look like? Is it "get a single binary" stuff?

PyOxidizer feels like the big win, but a part of me wonders if the original sin of allowing arbitrary code execution during installs really stops a "wrapper" tool from getting this right. https://pyoxidizer.readthedocs.io/en/v0.9.0/index.html


The gold standard for dynamic interpreted language package management is Yarn, and to a lesser extent npm. They are both cross-platform, allow for easy publishing and building of native code dependencies, and support each package having its own conflicting dependencies, which means you don't have to do SAT solving. Furthermore, the metadata is held outside of the package, so package selection is much quicker than Python, which requires downloading the entire library first and parsing the requirements.


Nope, CPAN is still the gold standard.

Packages are tested before installation, and there is no need for hacks like pyenv or broken Ruby envs. It also plays nicely with vendor packaging. No SAT solving needed; deps are resolved dynamically, powered by simple Makefiles, not a special, baroque and limited declaration syntax.


> support each package having its own conflicting dependencies

So that you don't have to integrate upstream changes continuously... Not a way to build a sane ecosystem if you ask me.

> package selection is much quicker than Python

So is it because it downloads multiple versions of the same dependencies that it's so much slower than pip? Or is there more?


Is there a difference between package management for dynamic and static languages? Because without this arbitrary distinction, npm is pretty much in last place in any ranking of package managers.


Did you try the "Sponsor" link on pypi.org? It will take you here: https://pypi.org/sponsor/

> The Python Software Foundation is receiving $407,000 USD to support work on pip in 2020. Thank you to Mozilla (through its Mozilla Open Source Support Awards) and to the Chan Zuckerberg Initiative for this funding!


They should give that to the meson author instead. He knows what he's doing and has a clue about C/C++.

Really, the whole of distutils is a fancy way to produce a simple zip archive and put it in a well-known location in site-packages.

meson has the build figured out; what remains is just a way of installing that archive with a bit of metadata.
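
A wheel really is just a zip archive plus metadata; a sketch (the filename is illustrative):

    import zipfile

    # A .whl is a plain zip: the module files plus a *.dist-info directory.
    with zipfile.ZipFile("example-1.0-py3-none-any.whl") as whl:
        print(whl.namelist())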


Thanks, donated.


This gives the impression that Python, similar to C++, might have entered a competition among popular languages over which one can accumulate the most popular features.

Obviously, pattern matching comes from the ML family and functional Lisps like Clojure. What makes it a difficult integration into Python is that in languages such as Rust, OCaml, Haskell, but also Racket and Clojure, almost everything is an expression, and name bindings are, apart from a few exceptions, always scoped. Consequently, pattern matching is an expression, not a statement.
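
To illustrate: Python's match is a statement, so every arm has to assign as a side effect, whereas in OCaml or Rust the whole match itself evaluates to a value. A sketch:

    point = (3, 0)

    match point:
        case (0, 0):
            label = "origin"
        case (x, 0):
            label = f"on the x-axis at {x}"
        case _:
            label = "somewhere else"

    print(label)  # "on the x-axis at 3"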

A similar issue is that Python 3 tried to become more "lazy", similar to how Haskell and Clojure are - this is the reason for replacing some list results with generators, which is a subtle breaking change. Lazy evaluation of sequences is nice-to-have on servers but its importance in Haskell and Clojure comes from them being functional languages which are geared towards a "pure" (side-effect-free) style, in which they differ a lot from Python.
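
(The subtle part in one example: Python 2's map returned a list you could traverse twice, while Python 3's returns a one-shot iterator:)

    m = map(str.upper, ["a", "b"])
    print(list(m))  # ['A', 'B']
    print(list(m))  # []  (the iterator is already exhausted)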

My impression is also that over time, Python has really absorbed a huge number of features from functional Lisp dialects. This might seem surprising. Well, here are some things that modern Lisps (like SBCL), Schemes and functional languages on the one hand, and Python 3 on the other, have in common:

* a Read-Eval-Print Loop (REPL)

* strong dynamic typing

* automatic memory management

* memory safety

* exceptions

* a wide choice of built-in data types: lists, strings, vectors, arrays, dictionaries / hash maps, tuples, sets

* keyword arguments and optional arguments in functions

* handling of names in scopes and namespaces

* closures and lambda functions

* list comprehensions

* pattern matching (limited support for tuples and lists in Python)

* Unicode strings

* arbitrarily long integers

* complex numbers

* rational numbers

* number types are part of a hierarchical type system (the numeric tower)

* empty sequences, containers and strings are logically false

* support for threads (but no real parallelism in Python)

* low-level bit operations (a bit limited in Python)

* easy way to call into C code

* type annotations

* if ... else can be used as an expression

* string formatting is a mini language

* hexadecimal, octal and binary literals

* standard functions for functional programming like map and filter

* support for OOP (e.g. by the Common Lisp Object System)

* support for asynchronous execution

What is notably missing from this list is parallelism (as opposed to concurrency); Python simply does not support it in a practical way, while some functional languages do support it extremely well.

The net result of this creeping featuritis appears to be that Python, which is still classified as "beginner friendly", is now actually significantly more complex than a language like Racket or Clojure, perhaps because these features are often not integrated that well. I even think that Rust, while clearly being targeted at a very different domain, is more streamlined and well-composed than Python.

What is also worth mentioning is that these functional languages have seen steady improvements in compilers and performance of generated code, with the result that Rust code is now frequently at least as fast as C code, SBCL is within arm's reach of C, and Clojure and Racket are about in the same league as Java - while Python code in comparison continues to be extremely slow:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Has it absorbed features from functional Lisp dialects, or does it just have features in common?

Early Python was inspired by ABC and C. ABC has a Python-like REPL, strong dynamic typing, automatic memory management, memory safety, a nice collection of built-in types, arbitrarily long integers, and rational numbers. C has low-level bit operations, (an easy way to call into C code,) a ternary expression-level if-else operator (and make no mistake, Python's expression-if is a ternary operator that's distinct from the statement-if-else), and hexadecimal and octal number literals.

There is at least a little influence from functional Lisps (Mypy lists Typed Racket as one of its influences), but a lot of what you list was taken from different languages, or is distinctly un-Lisp-like in Python, or was in Python from the start rather than absorbed over time, or is just obvious enough to put in a very high-level language to be reinvented.

It's also important to distinguish between internal complexity and user complexity. Arbitrarily long integers are complex to implement, but easier to use than fixed-length integers. Even features that do have a lot of user-facing complexity can be very easy to use in the common case. Python is hideously complex if you explore all the tiny details, but I'm not sure that it's all that complex to use. But I haven't used Clojure and Racket so I can't really comment on them.

> I even think that Rust, while clearly being targeted at a very different domain, is more streamlined and well-composed than Python.

I think I agree. But Rust has the benefit of only dealing with a measly five years of backward compatibility. Python has accumulated complexity, but the alternative would have been stagnation. If Python hadn't significantly changed since 1996 it would be more streamlined but also dead.

> What is also worth mentioning is that these functional languages have seen steady improvements in compilers and performance of generated code, with the result that Rust code is now frequently at least as fast as C code

I don't think Rust suffers from the issues that make functional languages hard to compile, so that might be a bad example. In Rust code it's unambiguous where memory lives. It has functional features augmenting a procedural model, rather than a functional model that has to be brought down to the level of procedural execution. So it might be "merely" as hard to optimize as C++.


> C has low-level bit operations,

As an unrelated fun fact, Common Lisp has more low-level bit operations than C, such as "and complement of integer a with integer b" or "exclusive nor":

http://www.lispworks.com/documentation/HyperSpec/Body/f_loga...

It also has logcount, which is the counterpart to the implementation-specific popcount() in C.
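
In Python, for comparison, these have to be composed from the primitives; this works because Python ints behave like arbitrary-precision two's complement (int.bit_count() needs 3.10+):

    a, b = 0b1100, 0b1010

    print(bin(~a & b))             # CL's logandc1: complement of a, ANDed with b -> 0b10
    print(bin(~(a ^ b) & 0b1111))  # CL's logeqv ("exclusive nor"), masked to 4 bits -> 0b1001
    print((0b1011).bit_count())    # CL's logcount, i.e. popcount -> 3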


> but a lot of what you list was taken from different languages, or is distinctly un-Lisp-like in Python, or was in Python from the start rather than absorbed over time, or is just obvious enough to put in a very high-level language to be reinvented.

Here, Python clearly borrows from functional languages. And there are basically two families of functional languages: strongly and statically typed languages like ML, OCaml, Haskell, Scala and F# on the one hand, and dynamically typed Lisps and Schemes on the other.

My point is that all these adopted features are present in the latter category.


How many of these features are present in neither statically typed functional languages nor dynamically typed procedural languages?

My impression is that Python has a) a lot of bog-standard dynamic features, and b) a few functional features (like most languages nowadays).

Group a) overlaps with functional Lisps, but no more than with ABC and Perl and Lua, so functional Lisps are not a great reference point.

Group b) overlaps with functional Lisps, but no more than with ML and Haskell, or even modern fundamentally-procedural languages like Kotlin(?) and Rust, so functional Lisps still aren't a great reference point.

It's mostly parallel evolution. It can be interesting to compare Python to functional Lisps because similarities are similarities no matter where they come from.

But I don't think that functional Lisps neatly slot into an explanation as to why Python looks the way it does. In a world where functional Lisps didn't exist Python might not have looked all that different. In a world where ABC and Modula didn't exist Python would have looked very different, if it existed at all.


> Group b) overlaps with functional Lisps, but no more than with ML and Haskell, or even modern fundamentally-procedural languages like Kotlin(?) and Rust, so functional Lisps still aren't a great reference point.

Both of them stem from the lambda calculus. The difference between ML languages and Lisps is the type system. To do functional operations like map, foldl, filter and reduce in compiled ML-style languages with strong static typing, one needs a rather strong and somewhat complex type system. When you try that at home with a weaker type system, like the one C++ has, the result is messy and not at all pleasant to write.

Lisps/Schemes do the equivalent thing with strong dynamic typing, and good Lisp compilers doing a lot of type inference for speed.

> It's mostly parallel evolution. It can be interesting to compare Python to functional Lisps because similarities are similarities no matter where they come from.

Lisps (and, for the field of numerical computation, also APL and its successors) had and continue to have a lot of influence. They are basically at the origin of the language tree of functional programming. The MLs are a notable fork, and apart from that there are basically no new original developments. I would not count other languages like Java or C++ picking up some FP features, like lambdas, as new developments.

What's interesting, however, is the number of features that Python 3 now has in common with Lisps. Lisps are minimalist languages - they have only a limited number of features, which fit together extremely well.

And if all these adopted features were not basically arbitrary, unconnected, and easy to bolt on, why does Python have such notably bad performance and -- similar to C++ -- such an explosion in complexity?


Map, filter and lambda were originally suggested by a Lisp programmer, so those do show functional Lisp heritage. (I don't know of similar cases.) But they're a small part of the language. Comprehensions are now emphasized more, and they come from Haskell, and probably SETL originally—no Lisp there.

> They are basically at the origin of the language tree of functional programming.

That's fair. But that only covers Python's functional features, which aren't that numerous.

> if all these adopted features were not basically arbitrary, unconnected, and easy to bolt on

I never said they weren't! I just don't think they're sourced from functional Lisps.

> why does Python have such notably bad performance

Because it's very dynamic, not afraid to expose deep implementation details, and deliberately kept simple and unoptimized. In the words of Guido van Rossum: "Python is about having the simplest, dumbest compiler imaginable."

Even if you wanted to, it's hard to optimize when somewhere down the stack someone might call sys._getframe() and start poking at the variables twenty frames up. That's not quite a language design problem.
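
For instance, here is the kind of CPython-specific introspection that has to keep working (a toy sketch):

    import sys

    def inner():
        # Reach one frame up the stack and read the caller's local variable.
        return sys._getframe(1).f_locals["secret"]

    def outer():
        secret = 42
        return inner()

    print(outer())  # 42: any caller's locals are observable at run time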

PyPy is faster than CPython but it goes to great lengths to stay compatible with CPython's implementation details. A while ago I ran a toy program that generated its own bytecode on PyPy, to see what would happen, and to my surprise it just worked. I imagine that constrains them. V8 isn't bytecode-compatible with JavaScriptCore, at least to my knowledge.

The most pressing problems with Python's performance have more to do with implementation than with high-level language design.

PHP is the king of arbitrary, unconnected, bolted-on features, and it's pretty fast nowadays. Not much worse than Racket, eyeballing benchmarksgame, and sometimes better.

> and -- similar to C++ -- such an explosion in complexity?

I'm not so sure that it does. I'm given to understand the problem with C++ is that its features compose badly and interact in nasty ways. Do Python's? Looking at previously new features, I mainly see people complaining about f-strings and the walrus operator, but those are simple syntactic sugar that doesn't do anything crazy.

Instead of an explosion in complexity, I think there's merely a steady growth in size. People complain that it's becoming harder to keep the whole language in your head, and that outdated language features pile up. I think those are fair concerns. But these new features don't make the language slower (it was already slow), and they don't complicate other features with their mere existence.

The growth isn't even that fast. Take a look at https://docs.python.org/3.9/whatsnew/3.9.html . I wouldn't call it explosive.

I don't know enough C++ to program in it, but the existence of operator= seems like a special kind of hell that nothing in Python compares to.


> Even if you wanted to, it's hard to optimize when somewhere down the stack someone might call sys._getframe() and start poking at the variables twenty frames up. That's not quite a language design problem.

It's hard to optimize only if you accept the tenet that sys._getframe(), and all its uses, must continue to work exactly the same in optimized code.

Instead, you can just declare that it (and any related anti-pattern of the same ilk) won't work in optimized code. If you want the speed from optimized compiling of some code, then do not do those things in that particular code.

The programmer can also be given fine-grained tools over optimization, so as to be able to choose how much is done where, on at least a function-by-function basis, if not statement or expression.

It's not written in stone that compiled code must behave exactly as interpreted code in every last regard, or that optimized code must behave as unoptimized code, in every regard. They behave the same in those ways which are selected as requirements and documented, and that's it.

In C in a GNU environment, I suspect your Glibc backtrace() function won't work very well if the code is compiled with -fomit-frame-pointer.

In the abstract semantics of C++, there are situations where the existence of temporary objects is implied. These objects are of a programmer-defined class type and can have constructors and destructors with side effects. Yet C++ allows complete freedom in optimizing away temporary objects.

The compiler could help by diagnosing, as much as possible, situations where it's not able to preserve this kind of semantics. For example, if a sys._getframe call is being compiled with optimizations that rule it out, a warning could be issued that it won't work, and the generated code for it could blow up at run-time if stepped on.

One way in which compiled code in a dynamic language could differ from interpreted code (or less "vigorously" compiled code) is safety. For that, you want some fine-grained, explicit switch, which expresses "in this block of code it's okay to make certain unsafe assumptions about values and types". Then the optimizer removes checks from the generated code, or chooses unsafe primitives from the VM instruction set.

The code will then behave differently under conditions where the assumptions are violated. The unoptimized code will gracefully detect the problems, whereas the vigorously compiled code will behave erratically.

This entire view can nicely take into account programmer skill levels. Advanced optimization simply isn't foisted onto programmers of all skill levels, forcing them to grapple with issues they don't understand, with impaired ability to debug. You make it opt-in. People debug their programs to maturity without it and then gradually introduce it in places that are identified as bottlenecks.


> In C in a GNU environment, I suspect your Glibc backtrace() function won't work very well if the code is compiled with -fomit-frame-pointer.

Actually, backtraces work correctly without explicit framepointers (in a typical GNU environment using ELF+DWARF).

The general concept has existed in DWARF since version 2 in 1992. The mechanism used for this is known as Call Frame Information (CFI)[0][1] — not to be confused with Control Flow Integrity, which is unrelated.

Here's some example libgcc code that evaluates CFI metadata[2]; there's similar logic in the libunwind component of llvm[3].

Burning a register on a frame pointer is a big deal on i386 and somewhat less so on amd64; there are other platforms where the impact is even lower. So, just know that you don't have to include FPs to be able to get stack traces.

If you're interested in how to apply these directives to hand-written assembler routines, there are some nice examples in [0].

[0]: https://www.imperialviolet.org/2017/01/18/cfi.html

[1]: https://sourceware.org/binutils/docs/as/CFI-directives.html

[2]: https://github.com/gcc-mirror/gcc/blob/master/libgcc/unwind-...

[3]: https://github.com/llvm/llvm-project/blob/main/libunwind/src...


> I don't think Rust suffers from the issues that make functional languages hard to compile, so that might be a bad example.

The issue is that in functional languages, the compiler has more information and can rely on more assumptions, so it can generate more optimized code. This is also why Common Lisp can compile to very fast code (in a few of the cited micro-benchmarks, faster than Java).


Yes, a lot of these features are hard to integrate into Python because it has peculiar (to say the least) namespaces.

You just need to follow the mailing lists [1] to see that many core developers themselves are caught out by some namespace abnormality.

It is a pity that Greenspun's 10th rule is in full effect again and we're stuck with an inferior Lisp/ML, especially in the scientific sector.

[1] Or rather the archives, since they are now de facto corporate owned. There is no free discussion any more and people are afraid to post.


> and we're stuck with an inferior Lisp/ML, especially in the scientific sector.

You will love Julia.

Here are some links:

https://julialang.org/blog/2012/02/why-we-created-julia/

Julia: Dynamism and Performance Reconciled by Design (https://dl.acm.org/doi/pdf/10.1145/3276490) <= This is great and surprisingly approachable.

https://opensourc.es/blog/basics-multiple-dispatch/

And when you start finding things that you miss, Julia and the community got you with excellent Metaprogramming support.

https://github.com/thautwarm/MLStyle.jl

https://github.com/MikeInnes/Lazy.jl

https://github.com/jkrumbiegel/Chain.jl


I know really little about Python except that it was heavily adopted by Google.

However, for C++, I think that corporatization is having a strong negative influence on the language, leading to it being stuffed with poorly integrated features which nobody really has an overview of any more.



Are you on the wrong article discussion? Or is this just a personal axe to grind?



