notatallshaw's comments

The pros and cons of this were extensively discussed: https://discuss.python.org/t/pep-701-syntactic-formalization...

In summary, the winning argument was that it's always possible for users to write unreadable code if they want to, and Python generally does not impose arbitrary language-level restrictions to stop this. So the benefits of a well-defined, logically consistent f-string syntax outweigh the possibility that some users will choose not to follow best practice.


So, in short, they are doing this just for it to end up on every coding-style ban list?


Having a well-defined syntax allows:

* Other Python implementations to correctly and confidently implement f-strings

* CPython to use its regular parser to parse f-strings rather than have a special sub-parser

* Lots of edge cases in f-string syntax to work consistently (see the sketch below)
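A small sketch of one such edge case, assuming Python 3.12 or later: reusing the outer quote character inside a replacement field used to be a SyntaxError, and now parses consistently (whether or not your style guide allows it).

```python
names = ["Ada", "Grace", "Barbara"]

# Before Python 3.12, reusing the same quote character inside the
# replacement field was a SyntaxError; the formalized grammar lets the
# regular parser handle it.
print(f"{", ".join(names)}")

# Arbitrary nesting also parses consistently now (readable or not).
print(f"{f"{f"{1 + 1}"}"}")
```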

As you could have seen by reading the discussion, rather than making the bad-faith comment "they are doing this just for <bad thing>", obviously no one wants to put a lot of effort into something that has no benefit at all.


FYI, pip is specifically not a package manager; it's a package installer.

Pip does not attempt to resolve dependency conflicts among already-installed packages, only among the ones it is currently trying to install. Nor does it try to manage the life cycle of installed packages, such as removing transitive dependencies that were never specified or are no longer needed, or creating consistent environments from a lock file.

As package specifications have become better defined (rather than "whatever setup.py does") and better enforced (e.g. Pip will soon reject non-compliant version numbers), there are several attempts at writing full-fledged package managers (Rye, Huak, Pixi, etc.), and I'm sure once one of them gains critical mass it will replace Pip quickly.
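As a rough sketch of the kind of whole-environment consistency check a package manager has to do (and which pip only approximates after the fact via `pip check`), assuming the third-party `packaging` library is installed:

```python
from importlib.metadata import PackageNotFoundError, distributions, version

from packaging.requirements import Requirement

# Walk every installed distribution and report requirements that are not
# satisfied by what is actually installed right now.
for dist in distributions():
    for req_string in dist.requires or []:
        req = Requirement(req_string)
        # Skip requirements that only apply to extras or other platforms.
        if req.marker and not req.marker.evaluate({"extra": ""}):
            continue
        try:
            installed = version(req.name)
        except PackageNotFoundError:
            print(f"{dist.metadata['Name']}: missing dependency {req.name}")
            continue
        if not req.specifier.contains(installed, prereleases=True):
            print(f"{dist.metadata['Name']}: needs {req}, found {installed}")
```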


Thanks for the info! That would explain why I spend so much time fighting dependency errors when I upgrade something ML related ...


You don't; this is an internal implementation detail of CPython. The interpreter decides which objects are immortal, e.g. None.
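You can still observe it from Python if you're curious; a small sketch, noting that the exact number is a CPython 3.12+ implementation detail (PEP 683) and may differ between builds:

```python
import sys

# On CPython 3.12+, immortal objects such as None, True, and small ints
# report a fixed, very large reference count that never changes; on
# earlier versions you see an ordinary count that fluctuates.
print(sys.getrefcount(None))
print(sys.getrefcount(True))
print(sys.getrefcount(100))
```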


> Now I need

You don't need to do anything; you can ignore all type hints.

> use IF to guard it in CI in every file

Are you talking about "if TYPE_CHECKING:"?

Your other option is to put "from __future__ import annotations" at the top of the file, or wait for PEP 649, which makes type annotations lazily evaluated, to land in a future Python release.
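For reference, a minimal sketch of both approaches; the imported module and type name here are hypothetical placeholders:

```python
from __future__ import annotations  # all annotations become strings, evaluated lazily

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only seen by type checkers; never imported at runtime.
    from heavy_module import HeavyType  # hypothetical import

def process(value: HeavyType) -> None:  # fine at runtime thanks to the future import
    ...
```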


Taking a look at pulp, it seems to be just a solver abstraction API, with all the real work done by solver libraries written in lower-level languages. It looks like the default solver is COIN-OR CLP/CBC, which is written in C++: https://github.com/coin-or/Cbc/tree/master/src
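For example, a tiny model (a sketch assuming pulp's bundled CBC backend is available) where pulp only assembles the problem objects and hands them off:

```python
import pulp

# pulp builds the model in Python...
prob = pulp.LpProblem("toy", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)

prob += 3 * x + 2 * y       # objective
prob += 2 * x + y <= 10     # constraints
prob += x + 3 * y <= 15

# ...and the actual optimization happens inside the CBC solver here.
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(x), pulp.value(y))
```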

Maybe I'm misunderstanding something here and it's the abstraction API causing the problems, but it seems like it's up to the solver implementation to be efficient here?


The point is that the solvers can deal with larger problems than pulp can. Gurobi can handle hundreds of thousands of expressions, but pulp will just run out of memory at that point.


Well I was replying to this:

> I wish python and it’s libraries wasn’t so inefficient.

And it seems like Pulp isn't a solver by itself so comparing it to gurobi doesn't make a lot of sense?

I see gurobi also has a Python interface, so comparing it to that makes more sense. Does the gurobi Python interface run out of memory? I suppose it matters which part of Pulp is running out of memory: the underlying solver being used, or the API itself?


It prevents the data between you and the requester from being read or tampered with in transit.

Before HTTPS was popular, I used to see ISPs inject tracking JavaScript or ads into arbitrary websites for their "customers".

So, in some sense yes, this preserves a legitimate version of your website for the requester.


And browsers would give a huge red alert for self-signed certs but say nothing about plaintext HTTP. Presumably that's where the snark is coming from. Clearly plaintext HTTP was less secure than self-signed certs, but browsers perpetuated the "trusted" cert cartel.


Having a certificate from a CA like LE proves that you don't have a local MITM. A MITM would also have to somehow get between LE's servers and the website in order to obtain a trusted certificate. A self-signed certificate does not offer that guarantee.


> And browsers would give a huge red alert for self-signed certs but say nothing about plaintext HTTP

This is largely because putting a huge red alert in front of plaintext HTTP pages would provoke a huge backlash from anti-HTTPS factions on the Internet.

HTTP pages should have a big red alert on them, and browsers are very slowly but surely moving in a direction of being HTTPS by default and HTTPS-only in the limited instances where it's possible. Arguably even in that world, a site claiming that the connection is secure and then offering a bad certificate is more worrying than a site that never claims the connection is secure in the first place. But ideally, eventually, we hope that the vast majority of the web is using certificates, and that visiting an HTTP-only page should be a rare event, possibly with some kind of warning in front of it.

Browsers have at the very least gotten rid of the SSL green padlock and have de-emphasized certificate origin in their presentation, and HTTP-only pages at least get labeled as insecure in modern browsers. That's a step in the right direction. But yeah, it's tough to treat HTTP-only pages the way they should be treated because a bunch of Internet users who dismiss MITM attacks will cry murder if browsers do so.

And of course, absent a bunch of infrastructure, pinning capabilities, and authentication mechanisms that don't exist for browsers, self-signed certificates as currently used don't really prove anything about the security of your connection.


Why does everything need to be secure? A random blog doesn't need to be secure. Or a random personal website. In fact, regular HTTP is preferable there because it's faster, so it consumes less power and it can run on lower-spec machines. (No need to decrypt anything.)


Case in point.

This comment is the reason why browsers don't currently display giant warnings in front of HTTP pages even though they do arguably imply even less security than self-signed certificates. It has nothing to do with a browser conspiracy or narrative about "trusted" certs; browsers have largely been moving in a positive direction on that front.


> It's relatively simple to make the GIL go away: just compile to some VM that has a good concurrent garbage collector would be one approach

Sure, if you don't mind paying a 50-90% hit to single-threaded performance, or completely abandoning C-API compatibility and having C extensions start from scratch, then there are simple approaches.

If you look at any past attempt to remove the GIL, you'll see that meeting these two requirements, i.e. not having terrible single-threaded performance and not needing an almost completely new C-API, is actually very complex and takes a lot of expertise to implement.


This might be a dumb question, but why would removing the GIL break FFI? Is it just that existing no-GIL implementations/proposals have discarded/ignored it, or is there a fundamental requirement, e.g. C programs unavoidably interact directly with the GIL? (In which case, couldn't a "legacy FFI" wrapper be created?) I know that the C-API is only stable between minor releases [0] compiled in the same manner [1], so it's not like the ecosystem is dependent upon it never changing.

I cannot seem to find much discussion about this. I have found a no-GIL interpreter that works with numpy, scikit, etc. [2][3] so it doesn't seem to be a hard limit. (That said, it was not stated if that particular no-GIL implementation requires specially built versions of C-API libs or if it's a drop-in replacement.)

[0]: https://docs.python.org/3/c-api/stable.html#c-api-stability

[1]: https://docs.python.org/3/c-api/stable.html#platform-conside...

[2]: https://github.com/colesbury/nogil

[3]: https://discuss.python.org/t/pep-703-making-the-global-inter...


> C programs unavoidably interact directly with the GIL?

Bingo. They don't have to, but often the point of C extensions is performance, which usually means turning on parallelism. E.g. NumPy will release the GIL in order to use machine threads on compute-heavy tasks. I'm not worried about the big 5 (numpy, scipy, pandas, pytorch, and sklearn); they have enough support that they can react to a GILectomy. It's everyone else that touches the GIL but may not have the capacity or ability to update in a timely manner.
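A rough sketch of that pattern, with the caveat that actual speedups depend on the BLAS build backing NumPy:

```python
import threading

import numpy as np

a = np.random.rand(1500, 1500)

def work():
    # The matrix multiply runs in C with the GIL released, so several of
    # these threads can make progress at once even on a GIL-ful CPython.
    a @ a

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```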

I don't think this is something which can be shimmed or ABI-versioned either. It's deeeep and touches huge swaths of the CPython codebase.


Thanks, that explains a lot. Sounds like a task that would have to be done in Python 4, if it ever exists.


> or is there a fundamental requirement, e.g. C programs unavoidably interact directly with the GIL?

C programs can both use the GIL for thread safety and make assumptions about the safety of interacting with a Python object.

Some of those assumptions are not real guarantees from the GIL but are good enough in practice; they would no longer be good enough in a no-GIL world.

> I know that the C-API is only stable between minor releases [0] compiled in the same manner [1], so it's not like the ecosystem is dependent upon it never changing.

There is a limited API, tagged as abi3 [1], which is unchanging and doesn't require recompiling, and every attempt to remove the GIL so far would break that.

> so it's not like the ecosystem is dependent upon it never changing

But the wider C-API does not change much between major versions; it's not like the way you interact with the garbage collector completely changes, forcing you to rethink how you write concurrency. This allows the many projects which use Python's C-API to update to new major versions of Python relatively quickly.

> I have found a no-GIL interpreter that works with numpy, scikit, etc. [2][3] so it doesn't seem to be a hard limit.

The version of nogil Python you are linking to is the product of years of work by an expert funded by Meta to work on this full time, drawing on knowledge from many previous attempts to remove the GIL, including the "gilectomy". Also, you are linking to the old version based on Python 3.9; there is a new version based on Python 3.12 [2].

This strays from the points I was making, but if this specific attempt to remove the GIL is adopted, it is unlikely to be switched over in a "big bang", e.g. Python 3.13 followed by a Python 4.0 with no backwards compatibility for C extensions. The Python community does not want to repeat the mistakes of the Python 2 to 3 transition.

Far more likely is trying to find a way to have a bridge version that supports both styles of extensions. There is a lot of complexity in this though, including how to mark these in packaging, how to resolve dependencies between packages which do or do not support nogil, etc.

And even this attempt to remove the GIL is likely to make things slower in some applications, both in terms of real-world performance (some benchmarks, such as MyPy, show a nearly 50% slowdown, and there may be even worse edge cases not yet discovered) and in terms of lost development, as the Faster CPython project will likely be unable to land a JIT in 3.13 or 3.14 as they currently plan.

[1]: https://docs.python.org/3/c-api/stable.html#c.Py_LIMITED_API

[2]: https://github.com/colesbury/nogil-3.12


Also, there has been a growing trend for the most popular packages to offer precompiled wheels on PyPI instead of just sdist releases.

This meant that people who had moved to Conda because they couldn't get Pip to install important packages into their environment took another look and found that they could now actually get things installed using Pip.

At the same time, Pip got a resolver, giving you install-time confidence that you're not installing conflicting packages, and recently (Pip 23.1+) the resolver's backtracking got pretty good.

That said, Conda mostly solved this (and once mamba is the default resolver engine things will be really fast), Pip is never going to be a package manager, Poetry still isn't an environment manager, and most other Python package/installer alternatives to Conda won't do things like install your JupyterLab's nodejs dependency.

After many years I now almost exclusively use Pip to install into an environment, but still nothing beats Conda for bootstrapping the non-Python-package requirements (such as Python itself) or for getting things working when you are in a weird environment where you can't install OS dev libraries.


Is Conda actually moving towards making mamba the default? Last I heard, they were distinctly uninterested in that, since mamba is implemented in C++, and they would rather rely on their own slow Python code, which they can more easily modify.


Yes they are; it's been integrated and stable in conda since last year, and you can turn it on with a solver config setting (e.g. `conda config --set solver libmamba`): https://www.anaconda.com/blog/a-faster-conda-for-a-growing-c...


`rotary_embedding_torch` has not defined any build requirements, hence your error: https://github.com/lucidrains/rotary-embedding-torch. You therefore need to install `numpy` before installing `rotary_embedding_torch`.

This is bad; as a package, `rotary_embedding_torch` is not of high enough quality to use as a requirement.

The good news is that Pip 23.1+ is forcing the issue: building `rotary_embedding_torch` will fail even if you have `numpy` installed, because builds take place in an isolated environment by default and you *must* declare any build requirements you have. This should force the quality of packages in the Python ecosystem to improve and make this error go away.
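For context, declaring build requirements is just a few lines of `pyproject.toml`; a sketch of the kind of table such a package would need, assuming a setuptools-based build:

```toml
[build-system]
# PEP 518: anything imported at build time (e.g. in setup.py) must be
# declared here so pip can install it into the isolated build environment.
requires = ["setuptools", "numpy"]
build-backend = "setuptools.build_meta"
```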

