
Is there a guide for how to use uv if you're a JS dev coming from pnpm?

I just want to create a monorepo with python that's purely for making libraries (no server / apps).

And is it normal to have a venv for each library package you're building in a uv monorepo?


If the libraries are meant to be used together, you can get away with one venv. If they should be decoupled, then one venv per lib is better.

There is not much to know:

- uv python install <version> if you want a particular version of python to be installed

- uv init --vcs none [--python <version>] in each directory to initialize the python project

- uv add [--dev] to add libraries to your venv

- uv run <cmd> when you want to run a command in the venv

That's it, really. Any bonuses can be learned later.
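
Put together, a minimal sketch for one library package (names are made up; --lib scaffolds a library layout):

  uv python install 3.12                  # optional: install a specific Python
  mkdir -p libs/mylib && cd libs/mylib
  uv init --vcs none --lib --python 3.12  # scaffold the project
  uv add --dev pytest                     # like a devDependency
  uv run pytest                           # runs inside the venv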


There's also workspaces (https://docs.astral.sh/uv/concepts/projects/workspaces/) if you have common deps and it's possible to act on a specific member of the workspace as well.

That's one of the bonuses I was thinking about. It's nice if you have a subset of deps you want to share, or if one dep is actually part of the monorepo, but it does require knowing more.
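
A root pyproject.toml for a workspace might look like this (the member glob and dep name are just examples):

  [project]
  name = "my-monorepo"
  version = "0.1.0"
  requires-python = ">=3.12"
  dependencies = ["my-shared-lib"]

  [tool.uv.workspace]
  members = ["libs/*"]

  [tool.uv.sources]
  # resolve this dep from the workspace instead of PyPI
  my-shared-lib = { workspace = true }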

Thanks. Why is the notion of run and tool separate? Coming from JS, we have the package.json#scripts field and everything executes via a `pnpm run <script name>` command.

Tool?

Maybe you mean uv tool install?

In that case, it's something you don't need right now. uv tool is useful, but it's a bonus: it's for installing third-party utilities outside of the project.

There is no equivalent to scripts yet, although they are adding it as we speak.

uv run executes any command in the context of the venv (which is like node_modules); you don't need to declare commands before calling them.

e.g.: uv run python will start the Python shell.


I was looking at https://docs.astral.sh/uv/concepts/tools/#the-uv-tool-interf...

Edit: I get it now. It's like npm's `npx` command.


uvx is the npx equivalent; it's provided with uv and also has some nice bonuses.
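
e.g. (ruff here is just an example tool):

  uvx ruff check .   # run a tool without adding it to your project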

There's also uv sync, for when you clone a GitHub repo.

uv run in the freshly cloned repo will create the venv and install all deps automatically.

You can even use --extra and --group with uv run, just like with uv sync. But in a monorepo, those are rarely used.
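
A fresh clone then looks something like this (the repo URL and the "pdf" extra are placeholders):

  git clone https://github.com/someorg/somerepo && cd somerepo
  uv sync                             # creates the venv and installs all deps
  uv run pytest                       # or skip sync; uv run syncs automatically
  uv run --extra pdf the_script.py    # include an optional extra, if defined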


Thanks for the info.

I looked at the group documentation, but it's not clear to me why I would want to use it, or where I would use it:

https://docs.astral.sh/uv/concepts/projects/layout/#default-...

(I'm a JS dev who has to write a set of python packages in a monorepo.)


sync is something you will rarely use directly; it's most useful for scripting.

uv run is the bread and butter of uv: it will run any command you need in the project, and it ensures it will work by syncing all deps and making sure your command can import stuff and call Python.

In fact, to run a Python script, you would do uv run python the_script.py.

It's so common that uv run the_script.py works as a shortcut.

I will write a series of articles on uv on bitecode.dev.

I will write them so that they work for non-Python devs as well.


Did you mean group and not sync?

Really looking forward to the articles!


Sorry, I misread and stayed on sync. Groups and extras are for lib makers to create sets of optional dependencies. Groups are private ones for maintainers; extras are public ones for users.
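
In pyproject.toml, that difference looks like this (the dep names are just examples):

  [project.optional-dependencies]
  # extras: public, users opt in with e.g. `pip install mylib[cli]`
  cli = ["click"]

  [dependency-groups]
  # groups: private, for maintainers, e.g. `uv sync --group lint`
  lint = ["ruff"]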

Agreed, if you don't know what Datadog is then you're probably not the target audience for this product.

Do you think if I don't know what datadog is, I am not the target audience for datadog?

Kinda? There aren't that many players in this niche and datadog is the "dog".

probably

Hi, I'm the author of LogLayer (https://loglayer.dev) for TypeScript, which has integrations with Datadog and its competitors. Sift looks easy to integrate with since you have a TS library and the API is straightforward.

Would you like me to create a transport for it (I'm not implying I'd be charging to do this; it'd be free)?

The benefit of LogLayer is that they'd just use the loglayer library to make their log calls, and it ships them to whatever transports they have defined for it. Better than having them manage two separate loggers (e.g. Sift and Pino) or write their own wrapper.
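
Roughly this shape (a sketch; the Sift transport is the hypothetical part I'd be writing):

  import { LogLayer } from "loglayer";
  import { PinoTransport } from "@loglayer/transport-pino";
  import { pino } from "pino";

  const log = new LogLayer({
    // one logging API fanning out to multiple transports;
    // a SiftTransport would slot in next to pino here
    transport: [new PinoTransport({ logger: pino() })],
  });

  log.withMetadata({ userId: "123" }).info("checkout completed");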


Hey, loglayer looks super cool! Would love to chat and set something up, send us an email at founders@runsift.com

Sent an e-mail!

Killdozer? Although "No one else was injured or killed,[1] in part due to timely evacuation orders"

https://en.wikipedia.org/wiki/Marvin_Heemeyer



Same, I have occupational lenses that are also focused to arm's length, and it has made a huge difference for me as well when using them for reading things on my computer screens. It makes reading small text easier and feels crisp.

Using it outside of its intended distance will cause eye strain since your eyes won't be able to focus properly.

My provider calls them "computer glasses". They do not have blue light filtering, as I work on implementing web designs and color accuracy matters to me.

I totally recommend computer glasses for anyone who works all day looking at a computer screen.

They would be a separate prescription / lens type (as in not progressive, I think) compared to daily-use glasses. I do have to swap to my daily-use glasses when I'm not sitting and looking at a monitor.

Using my daily-use glasses for monitor reading doesn't feel "right" compared to my computer glasses. There is a clear difference between them.


>Using it outside of its intended distance will cause eye strain since your eyes won't be able to focus properly.

Mine are more useful than I anticipated when I'm not using them for work. I would advise against anybody driving with the wrong pair of glasses, but I can see significantly better with my occupational lenses than without. I would not trust them at night, but during the day I can see well enough that I am not concerned about my driving. I don't intend to drive with them, but there has been the occasion here or there when I had to run somewhere quickly and forgot to swap my glasses.

It also helps that mine are progressives, so the very very top part of the lens is my "regular" prescription. I can use that to focus on something at a distance if necessary.

>They would be a separate prescription / lens type (as in not progressive I think) compared to daily use glasses. I do have to swap to my daily use when not using my computer glasses outside of sitting and looking at a monitor.

Like I mentioned above, mine are both occupational and progressive. I'd like to try non-progressive occupational lenses to see if I like them better, but I'm not convinced it would be worth the money.


Same. I've driven short distances sometimes to pick up lunch or something 5-10 minutes away because I forgot to switch my glasses. It wasn't ideal but perfectly doable.

I've only done it a handful of times, though. And also I wouldn't do so at night.


> Using it outside of its intended distance will cause eye strain since your eyes won't be able to focus properly.

I don't find that at all, personally. I wear my computer glasses almost all the time in the house and just let myself not try to focus on things. If anything, they seem to be better for eye strain than my normal distance lenses, because my eyes do try to focus with my normal lenses (since everything is supposed to be perfectly clear), whereas when I'm not wearing them I know there's a good reason things aren't in focus.

My distance glasses have progressive lenses, which may be part of that, as there's different strength depending on where you're looking at in the glasses. I've been tempted to remove progressive lenses from my next pair, as I tend to take them off to read anyway, and then I'd get a flat prescription like I have on my computer lenses.


Me too. My progressive lenses give me eye strain, and it is much worse at the computer. I have non-progressive lenses for work and they're much more comfortable. (Especially with my large monitor.)

Would love to speak with you for 20 mins to learn from your experience. If interested, ping me at jbornhorst [at] gmail [dot] com and I'll coordinate times.

The CSRF token is often stored in a cookie (the double-submit pattern). I guess one could try stealing the cookie, assuming the CSRF token hasn't been consumed.

But if one's cookie happens to be stolen, it can be assumed the attacker already has access to your session in general anyway, making CSRF moot.
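
For reference, the double-submit flavor of that idea, as a hypothetical Express-style handler:

  import express from "express";
  import cookieParser from "cookie-parser";

  const app = express();
  app.use(cookieParser());

  app.post("/transfer", (req, res) => {
    // The token arrives twice: in a cookie and in a header set by page JS.
    // A cross-site attacker can make the browser *send* the cookie but can't
    // *read* it to copy into the header -- unless the cookie was stolen
    // outright, at which point the session is compromised anyway.
    if (req.cookies["csrf_token"] !== req.headers["x-csrf-token"]) {
      res.status(403).send("CSRF check failed");
      return;
    }
    res.send("ok");
  });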


This might be useful for checking the general content of a chapter you're interested in if it hasn't been translated yet, but it's not clear if it handles things like the varying fonts / sizes used to convey the emotion of spoken dialog, does consistent translation (e.g. does it remember stylistic choices it made in earlier chapters), or handles tricky items that might be difficult to localize.

Also, how does it work, what is the tech behind it? Are you doing any of the training yourself?


Another thing I'm not sure machine translation can really "nail" is cultural context, or even little linguistic cues and other tidbits. I like when translators explain in the margins that one character is speaking in a certain register for XYZ reason, or that there's been a shift in a certain relationship signaled by a change in how they address each other, etc.

That said, I did just read a great series last night whose human translation ended right before the final two volumes, and hasn't been updated in nearly 8 years... so I may need to try some machine translation on those last two volumes just to see how things end.


Next.js doesn't play well with barrel packages (large packages that export everything into the main entrypoint file). It's a known issue (but rarely mentioned when you read about working with Next.js):

https://github.com/vercel/next.js/issues/48748

I've never used MUI, but assuming that MUI is a barrel package, and you do the following:

  import { Component } from "mui";
Next.js ends up compiling the entire package instead of just the component you need. If MUI has its components exported as separate files, an optimization would be:

  import { Component } from "mui/path/to/component";


I switched to Astro plus Tailwind and never looked back. If you are interested, though, imports are a pretty big thing in MUI: https://mui.com/material-ui/guides/minimizing-bundle-size/


To be honest, if you're making a content site such as a marketing site or blog, it really doesn't matter what you use. You can use Next, Astro, Gatsby, HTML/CSS, etc. It doesn't matter.

Next.js excels when you have a complex app with interactions, SSR requirements, CSR requirements, backend requirements, etc.


I disagree. I’m building an audio player in Astro.

https://github.com/mayo-dayo/app


What can one do with a Gaia ID? I don't think the article went into the impact of having it.


A long time ago I discovered a similar Google error: accessing YouTube got me into other people's accounts. I mean, just going to YouTube would show me signed in as another user. It turned out there was a caching/database issue, and the ISP/Google mingled accounts. Here is the old report: "The issue has been replicated by the editorial teams of both itp.net and Windows Arabic magazine. In testing, the user profiles that were visible were for users that had logged into YouTube only a few hours previously, suggesting that the pages have been cached by either Google or Etisalat's own servers, and were somehow being accessed in error through the cache.

Neither Google nor Etisalat have responded to request for comment at the time of writing.

The issue appears to be very similar to a problem which was reported by users of Kuwaiti ISP FASTtelco, who said that they were able to see other users' Gmail accounts and other personal details, although this was later denied by the ISP."


After reading it a bit further: they searched for a service that exposed the email behind a Gaia ID, and found one via Pixel Recorder.


From what I understood, you use it on the block-user API and get an email address.


But what else could one definitely or potentially do with it? It's an interesting question.


I think it's mainly a way to discover who's behind a YouTube comment or video. Getting their email address often leaks information about them.


Get Google Maps reviews, and search the Web Archive for Google Plus and profiles.google.com snapshots.


There's also our product, Airtop (https://www.airtop.ai/), which is in the scraping specialist / browser automation category and can generate screenshots too.


Hey, I'm curious what your thoughts are: do you need a full-blown agent that moves the mouse and clicks to extract content from webpages, or is a more simplistic tool that just scrapes pages + takes screenshots and passes them through an LLM generally pretty effective?

I can see niche cases like videos or animations being better understood by an agent, though.


Airtop is designed to be flexible; you can use it as part of a full-blown agent that interacts with webpages or as a standalone tool for scraping and screenshots.

One of the key challenges in scraping is dealing with anti-bot measures, CAPTCHAs, and dynamic content loading. Airtop abstracts much of this complexity while keeping it accessible through an API. If you're primarily looking for structured data extraction, passing pages through an LLM can work well, but for interactive workflows (e.g., authentication, multi-step navigation), an agent-based approach might be better. It really depends on the use case.

