That's one of the bonuses I was thinking about. It's nice if you have a subset of deps you want to share, or if one dep is actually part of the monorepo, but it does require knowing more.
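For reference, a minimal sketch of what a uv workspace can look like (the package names and paths here are made up):

    # root pyproject.toml
    [tool.uv.workspace]
    members = ["packages/*"]

    # in a member package that depends on a sibling
    [project]
    name = "my-app"
    dependencies = ["my-sibling-lib"]

    [tool.uv.sources]
    my-sibling-lib = { workspace = true }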
Thanks. Why are `run` and `tool` separate notions? Coming from JS, we have the package.json#scripts field and everything executes via a `pnpm run <script name>` command.
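Something like this, where the script name and tool are just placeholder examples:

    // package.json (the "test" script is arbitrary)
    {
      "scripts": {
        "test": "vitest"
      }
    }

    $ pnpm run test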
sync is something you'd rarely use; it's most useful for scripting.
uv run is the bread and butter of uv: it will run any command you need in the project, and it ensures it will work by syncing all deps and making sure your command can import your stuff and call Python.
In fact, to run a Python script, you should do uv run python the_script.py.
It's so common that uv run the_script.py works as a shortcut.
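Concretely, something like this (the command and script names are placeholders):

    $ uv run pytest                 # syncs the env, then runs pytest inside it
    $ uv run python the_script.py   # the explicit form
    $ uv run the_script.py          # shortcut: uv sees the .py and runs it with python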
I will write a series of articles on uv on bitecode.dev.
I will write it so that it works for non-Python devs as well.
Sorry, I misread and stayed on sync. Groups and extras are for lib makers to create sets of optional dependencies. Groups are private ones for maintainers; extras are public ones for users.
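In pyproject.toml terms, a sketch (the package and set names are made up):

    [project]
    name = "mylib"
    dependencies = ["requests"]

    # extras: public, users opt in with e.g. `pip install "mylib[cli]"`
    [project.optional-dependencies]
    cli = ["click"]

    # dependency groups: private to maintainers, e.g. `uv sync --group lint`
    [dependency-groups]
    lint = ["ruff"]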
Hi, I'm the author of LogLayer (https://loglayer.dev) for Typescript, which has integration with DataDog and competitors. Sift looks easy to integrate with since you have a TS library and the API is straightforward.
Would you like me to create a transport for it (I'm not implying I'd be charging to do this; it'd be free)?
The benefit of LogLayer is that they'd just use the loglayer library to make their log calls, and it ships them to whatever transports they have defined. Better than having them manage two separate loggers (e.g. Sift and Pino) or write their own wrapper.
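Roughly the shape it would take. The SiftTransport below is hypothetical (it's the thing I'm offering to build), and the config follows LogLayer's multi-transport setup as I understand it:

    import { LogLayer, ConsoleTransport } from "loglayer";
    // hypothetical package/class: the Sift transport proposed above
    import { SiftTransport } from "@loglayer/transport-sift";

    const log = new LogLayer({
      transport: [
        new ConsoleTransport({ logger: console }),
        new SiftTransport({ apiKey: process.env.SIFT_API_KEY }), // assumed options
      ],
    });

    // one call site; every configured transport receives the entry
    log.withMetadata({ userId: 42 }).info("checkout completed");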
Same, I have occupational lenses that are also focused to arm's length, and they have made a huge difference for me as well when reading things on my computer screens. They make small text easier to read, and everything feels crisp.
Using them outside of their intended distance will cause eye strain, since your eyes won't be able to focus properly.
My provider calls them "computer glasses". Mine don't have blue light filtering, as I implement web designs and color accuracy matters to me.
I totally recommend computer glasses for anyone who works all day looking at a computer screen.
They're a separate prescription / lens type (as in not progressive, I think) compared to daily-use glasses. I do have to swap to my daily-use pair whenever I'm not sitting and looking at a monitor.
Using my daily-use glasses for reading a monitor doesn't feel "right" compared to my computer glasses. There is a clear difference between them.
> Using them outside of their intended distance will cause eye strain, since your eyes won't be able to focus properly.
Mine are more useful than I anticipated when I'm not using them for work. I would advise against anybody driving with the wrong pair of glasses, but I can see significantly better with my occupational lenses than without. I would not trust them at night, but during the day I can see well enough that I'm not concerned about my driving. I don't intend to drive with them, but there has been the occasion here or there when I had to run somewhere quickly and forgot to swap my glasses.
It also helps that mine are progressives, so the very very top part of the lens is my "regular" prescription. I can use that to focus on something at a distance if necessary.
> They're a separate prescription / lens type (as in not progressive, I think) compared to daily-use glasses. I do have to swap to my daily-use pair whenever I'm not sitting and looking at a monitor.
Like I mentioned above, mine are both occupational and progressive. I'd like to try non-progressive occupational lenses to see if I like them better, but I'm not convinced it would be worth the money.
Same. I've driven short distances sometimes to pick up lunch or something 5-10 minutes away because I forgot to switch my glasses. It wasn't ideal but perfectly doable.
I've only done it a handful of times, though. And also I wouldn't do so at night.
> Using them outside of their intended distance will cause eye strain, since your eyes won't be able to focus properly.
I don't find that at all, personally. I wear my computer glasses almost all the time in the house and just let myself not try to focus on things. If anything they seem better for eye strain than my normal distance lenses, because with my normal lenses my eyes keep trying to focus (everything is supposed to be perfectly clear), whereas when I'm not wearing them I know there's a good reason things aren't in focus.
My distance glasses have progressive lenses, which may be part of that, as there's a different strength depending on where in the lens you're looking. I've been tempted to drop progressive lenses from my next pair, as I tend to take them off to read anyway, and then I'd get a flat prescription like I have in my computer lenses.
Me too. My progressive lenses give me eye strain, and it's much worse at the computer. I have non-progressive lenses for work and they're much more comfortable. (Especially with my large monitor.)
Would love to speak with you for 20 mins to learn from your experience. If interested, ping me at jbornhorst [at] gmail [dot] com and I'll coordinate times.
This might be useful for checking the general content of a chapter you're interested in if it hasn't been translated yet, but it's not clear if it handles things like the varying fonts / sizes used to convey the emotion of spoken dialog, translates consistently (e.g. does it remember stylistic choices it made in earlier chapters), or handles tricky items that are difficult to localize.
Also, how does it work? What's the tech behind it? Are you doing any of the training yourself?
Another thing I'm not sure machine translation can really "nail" is cultural context, or even little linguistic cues and other tidbits. I like when translators explain in the margins that one character is speaking in a certain register for XYZ reason, or that there's been a shift in a certain relationship signaled by a change in how they address each other, etc.
That said, I did just read a great series last night whose human translation ended right before the final two volumes, and hasn't been updated in nearly 8 years... so I may need to try some machine translation on those last two volumes just to see how things end.
Next.js doesn't play well with barrel packages (large packages that re-export everything through the main entrypoint file). It's a known issue (but rarely mentioned when you read about working with Next.js):
I've never used MUI, but assuming that MUI is a barrel package, and you do the following:
import { Component } from "mui";
Next.js ends up compiling the entire package instead of just the component you need. If MUI exports its components from separate files, an optimization would be:
    import { Component } from "mui/path/to/component";
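Newer Next.js versions also have a config-level mitigation for this; if memory serves it looks roughly like the below (check the docs for your version; the package name assumes MUI's actual entrypoint):

    // next.config.js
    module.exports = {
      experimental: {
        // rewrites barrel imports into direct per-file imports at build time
        optimizePackageImports: ["@mui/material"],
      },
    };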
To be honest, if you're making a content site such as a marketing site or blog, it really doesn't matter what you use. You can use Next, Astro, Gatsby, plain HTML/CSS, etc. Doesn't matter.
Next.js excels when you have a complex app with interactions, SSR requirements, CSR requirements, backend requirements, etc.
A long time ago I discovered a similar Google error: accessing YouTube got me into other people's accounts. I mean, just going to YouTube would show me signed in as another user. It turned out there was a caching/database issue, and the ISP/Google mingled accounts. Here is the old report: "The issue has been replicated by the editorial teams of both itp.net and Windows Arabic magazine. In testing, the user profiles that were visible were for users that had logged into YouTube only a few hours previously, suggesting that the pages have been cached by either Google or Etisalat's own servers, and were somehow being accessed in error through the cache.
Neither Google nor Etisalat have responded to request for comment at the time of writing.
The issue appears to be very similar to a problem which was reported by users of Kuwaiti ISP FASTtelco, who said that they were able to see other users' Gmail accounts and other personal details, although this was later denied by the ISP."
There's also our product, Airtop (https://www.airtop.ai/), which is in the scraping specialist / browser automation category and can generate screenshots too.
Hey, I'm curious what your thoughts are: do you need a full-blown agent that moves the mouse and clicks to extract content from webpages, or is a simpler tool that just scrapes pages + takes screenshots and passes them through an LLM generally effective enough?
I can see niche cases like videos or animations being better understood by an agent, though.
Airtop is designed to be flexible: you can use it as part of a full-blown agent that interacts with webpages, or as a standalone tool for scraping and screenshots.
One of the key challenges in scraping is dealing with anti-bot measures, CAPTCHAs, and dynamic content loading. Airtop abstracts much of this complexity while keeping it accessible through an API. If you're primarily looking for structured data extraction, passing pages through an LLM can work well, but for interactive workflows (e.g., authentication, multi-step navigation), an agent-based approach might be better. It really depends on the use case.
I just want to create a monorepo with python that's purely for making libraries (no server / apps).
And is it normal to have a venv for each library package you're building in a uv monorepo?