Hacker News | fluidcruft's comments

He asked for no wifi, was sold a no wifi dishwasher and then later changed his mind and wanted wifi after installing it. Where's the misrepresentation?

Haha Magic ToDo is fun. I got a reasonable set of steps for a project and when I'm in the "whatever, just tell me what to do" overwhelmed/surrender mode this would be great. Also love that subtasks can be easily broken down into subsubtasks with a click.

Would love this as a todoist extension for brainstorming subtasks.


uv is open source

https://docs.astral.sh/uv/reference/policies/license/

There is no "true" vs "untrue" open source, unless perhaps you mean copyleft, but that has nothing to do with whether or not there is corporate backing. Even GNU itself has had corporate backing for its compiler work and other utilities.


I assume you've seen this:

https://docs.astral.sh/uv/guides/integration/pytorch/

If the platform (OS) solution works for you, that's probably the easiest. It doesn't for me because I work on multiple Linux boxes with differing GPUs/CUDA versions. So I use the optional-dependencies solution, and it's mostly workable, but with an annoyance: uv sync forgets which --extras have been applied to the venv, so if you "uv add" something it will uninstall the installed torch and install the wrong one until I re-run uv sync with the correct --extra again. (uv add with --extra does something different.) And honestly I appreciate not having hidden venv state, but it is a bit grating.

There are some ways to setup machine/user specific overrides with machine and user uv.toml configuration files.

https://docs.astral.sh/uv/configuration/files/

That feels like it might help, but I haven't figured out how to get it to pick/hint the correct torch flavor for each machine. Similar issues with paddlepaddle.
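I haven't actually verified this works, but as I understand it the user-level file mirrors the [tool.uv] keys without the prefix, so a per-machine index pin might look something like this (an entirely untested sketch):

```toml
# ~/.config/uv/uv.toml -- untested sketch of a per-machine override.
# User-level uv config mirrors [tool.uv] keys without the prefix;
# whether an index pin here actually steers torch flavor selection
# is exactly what I haven't figured out.
[[index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```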

Honestly I just want an extras.lock at this point but that feels like too much of a hack for uv maintainers to support.

I have been pondering whether nesting uv projects might help so that I don't actually build venvs of the main code directly and the wrapper depends specifically on certain extras of the wrapped projects. But I haven't looked into this yet. I'll try that after giving up on uv.toml attempts.


Private industry can always build on open source as well. They just work on the parts that don't involve reinventing the wheel.

What a hateful thing to say.

I don't see any hate in it, it seemed like a post talking about sympathy for hateful people. They even said they didn't want the hateful people to be miserable. That's really nice.

I love how the ignorant go out of their way to prove their ignorance. Their words are worth less than the fleeting vibrations they emit into the ether.

All that matters in this world is our compassionate service to humanity. Our happiness, peace and contentment are completely correlated to it.

Justice is in the fabric of this universe, though most people are too ignorant and selfish to realize the truth all around them.

The truth is hidden behind the door of compassion, which we all have the choice to walk through or avoid entirely. Choose your destiny well, for you will reap what you sow.


If one wants to read flowery story arcs like some sort of renfaire fourth-wall instant karma narrator on a sitcom, then perhaps try Trump 2.0 as punishment Biden and the DNC deserves.

> try Trump 2.0 as punishment Biden and the DNC deserves.

It’s a shame you are being downvoted for this comment because it’s spot on.

Trump v.1 was an indictment of the DNC for running HRC, who was a dreadful candidate.

Let’s face it, in 2020 Biden was an over the hill terrible candidate who every other time he ran for president was considered a joke. Biden’s only benefit was that he was “not Trump” during a pandemic. If no pandemic Trump would have won 2020 easily.

The DNC repeated their mistakes in 2024, with two more dreadful candidates: 1) the obviously diminished Biden, whose decline it became apparent the Democrats were intentionally hiding, and then 2) Harris, who was so unpopular within her own party in 2020 that she had to drop out before the first primary votes.

I long for one decent presidential candidate from the democrats. I’d love to be able to vote again after 3 presidential cycles where I could not.


Language encodes what people need it to encode to be useful. I heard of an example with colors: there are some languages that don't even have a word for blue.

https://blog.duolingo.com/color-words-around-the-world/


Actual companies also get sold and churned into shit. See LastPass for example.

How are you separating the efficiency of the architecture from the efficiency of the substrate? Unless you have a brain made of transistors or an LLM made of neurons how can you identify the source of the inefficiency?

You can't, but the transistor-based approach is the inefficient one, and transistors are pretty good at efficiently doing logic. So either there's no possible efficient solution based on deterministic computation, or there's tremendous headroom.

I believe human and machine learning unify into a pretty straightforward model and this shows that what we're doing that ML doesn't can be copied across, and I don't think the substrate is that significant.


I generally agree, but one thing I find very frustrating (i.e. have not figured out yet) is how to deal with extras well, particularly with pytorch. Some of my machines have a GPU, some don't, and things like "uv add" end up uninstalling everything and installing the opposite flavor, forcing a resync with the appropriate --extra tag. The examples in the docs do things like CPU on Windows and GPU on Linux, but all my boxes are Linux. There has to be a way to tell it "hey, I always want --extra gpu on this box," but I haven't figured it out yet.

Getting the right version of PyTorch installed to have the correct kind of acceleration on each different platform you support has been a long-standing headache across many Python dependency management tools, not just uv. For example, here's the bug in poetry regarding this issue: https://github.com/python-poetry/poetry/issues/6409

As I understand it, recent versions of PyTorch have made this process somewhat easier, so maybe it's worth another try.


uv actually handles the issues described there very well (the uv docs have a page showing a few ways to do it). The issue for me is that uv has massive amnesia about which one was selected, and you end up thrashing packages because of that. uv is very fast at thrashing, though, so it's not as bad as if poetry were thrashing.

I end up going to the torch website and they have a nice little UI I can click what I have and it gives me the pip line to use.

That's fine if you are just trying to get it running on your machine specifically, but the problems come in when you want to support multiple different combinations of OS and compute platform in your project.

I could see this information on the website being encoded in some form in pypi such that it could be updated to support various platforms.

On nvidia jetson systems, I always end up compiling torchvision, while torch always comes as a wheel. It seems so random.

It sounds like you’re just looking for dependency groups? uv supports adding custom groups (and comes with syntactic sugar for a development group).

It is... but basically it needs to remember which groups are synced. For example, if you use an extra, you have to keep track of it constantly because sync thrashes between states all the time unless you pay close and tedious attention. At least I haven't figured out how to make it remember which extras are "active".

    uv sync --extra gpu
    uv add matplotlib # the sync this runs undoes the --extra gpu
    uv sync # oops also undoes all the --extra
What you have to do to avoid this is remember to use --no-sync all the time and then meticulously re-sync manually, remembering all the extras you actually currently want:

    uv sync --extra gpu --extra foo --extra bar
    uv add --no-sync matplotlib
    uv sync --extra gpu --extra foo --extra bar
It's just so... tedious and kludgy. It needs an "extras.lock" or "sync.lock" or something. I would love it if someone tells me I'm wrong and missing something obvious in the docs.
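In the meantime, the closest I've come to a workaround is keeping the "active" extras in a plain file next to pyproject.toml and rebuilding the --extra flags from it before every sync. The file name and wrapper here are made up for illustration; swap the final echo for the real uv invocation:

```shell
#!/bin/sh
# Untested-workflow sketch: record the extras you currently want in a
# ".extras" file (one per line), then rebuild the --extra flags from it
# so a plain wrapper call always re-applies them.
printf 'gpu\nfoo\n' > .extras      # the extras currently wanted

flags=""
while read -r extra; do
    [ -n "$extra" ] && flags="$flags --extra $extra"
done < .extras

# With uv installed you would run:  uv sync $flags
echo "uv sync$flags"               # prints: uv sync --extra gpu --extra foo
```

It's still a kludge, but at least the state lives in a file instead of in my memory.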

To make the change in your environment:

1. Create or edit the UV configuration file in one of these locations:

- `~/.config/uv/config.toml` (Linux/macOS)

- `%APPDATA%\uv\config.toml` (Windows)

2. Add a section for default groups to sync:

```toml
[sync]
include-groups = ["dev", "test", "docs"]  # Replace with your desired group names
```

Alternatively, you can do something similar in pyproject.toml if you want to apply this to the repo:

```toml
[tool.uv]
sync.include-groups = ["dev", "test", "docs"]  # Replace with your desired group names
```


Thank you! That's good to know. Unfortunately it doesn't seem to work for "extras". There may be some target other than sync.include-groups but I haven't found it yet.

What I am struggling with is what you get after following the Configuring Accelerators With Optional Dependencies example:

https://docs.astral.sh/uv/guides/integration/pytorch/#config...

Part of what that does is set up rules that prevent simultaneously installing cpu and gpu versions (which isn't possible). If you use the optional dependencies example pyproject.toml then this is what happens:

    $ uv sync --extra cpu --extra cu124
    Using CPython 3.12.7
    Creating virtual environment at: .venv
    Resolved 32 packages in 1.65s
    error: Extras `cpu` and `cu124` are incompatible with the declared conflicts: {`project[cpu]`, `project[cu124]`}
And if you remove the declared conflict, then uv ends up with two incompatible sources to install the same packages from:

    uv sync --extra cpu --extra cu124
    error: Requirements contain conflicting indexes for package `torch` in all marker environments:
    - https://download.pytorch.org/whl/cpu
    - https://download.pytorch.org/whl/cu124
After your comment I initially thought the extras might somehow be rewritten as dependency groups to use ~/.config/uv/config.toml, but according to the docs, dependency groups are not allowed to conflict with each other and must be installable simultaneously (which makes sense, since there is an --all-groups flag).
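For reference, the part of that guide's example pyproject.toml that produces the first error above looks roughly like this (reproduced from memory, so exact versions and names may differ):

```toml
[project.optional-dependencies]
cpu = ["torch>=2.5.0"]
cu124 = ["torch>=2.5.0"]

[tool.uv]
# declares the extras mutually exclusive, hence the error when both are passed
conflicts = [[{ extra = "cpu" }, { extra = "cu124" }]]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "cu124" },
]
```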



I haven't tried it yet but that looks like exactly what I've been missing.

You can control dependencies per platform

https://docs.astral.sh/uv/concepts/projects/dependencies/#pl...

Not sure if it's as granular as you might need
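From that page, the basic mechanism is standard PEP 508 environment markers on the dependency specifiers, something like this (package choices just illustrative):

```toml
[project]
dependencies = [
  # each entry is only installed where its marker holds
  "torch; sys_platform == 'linux'",
  "tensorflow-macos; sys_platform == 'darwin'",
]
```

Markers can key on things like sys_platform and python_version, but not on whether a GPU is present, which may be the sticking point here.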


This happened to me too, that is why I stopped using it for ML related projects and stuck to good old venv. For other Python projects I can see it being very useful however.

I'm not sure if I got your issue, but I can do platform-dependent `index` `pytorch` installation using the following snippet in `pyproject.toml` and `uv sync` just handles it accordingly.

    [tool.uv.sources]
    torch = [{ index = "pytorch-cu124", marker = "sys_platform == 'win32'" }]


Some Windows machines have compatible GPUs while others don't, so this doesn't necessarily help. What is really required is querying the OS for what type of compute unit it has and then installing the right version of an ML library, but I'm not sure that will be done.

Even without querying, just setting an environment variable, or having it remember in some way which extras are already applied to the synced .venv, would help.

i use uv+torch+cuda on linux just fine, never used the extra flag, i wonder what's the problem here?

Getting something that works out of the box on just your computer is normally fine. Getting something that works out of the box on many different computers with many different OS and hardware configurations is much much harder.
