FWIW, it appears one of the people who saw this post asked about it in a comment on a GitHub issue [1]:
> @seproDev unrelated, but what you think about https://news.ycombinator.com/item?id=42040600
... and one of the main maintainers (ranked 14th by number of commits, but recently active) replied the following:
> False positive in virus total. Calling yt-dlp without any arguments makes no web requests.
> To expand a bit more. Our releases are built with github runners and they report back the sha hash during build. https://github.com/yt-dlp/yt-dlp/actions/runs/11656153929 for the release from yesterday
> You can see the commit that was built, what we merged in the last couple days, and the hash of the resulting files to check against the files in the release section.
> Those network requests are likely just other processes on the machine. I remember windows executable would regularly show microsoft servers in the "connections made" list due to windows update and telemetry still running.
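If you want to actually run that check, here is a minimal sketch in Node (assuming the workflow prints SHA-256 digests; the file name is just an example of whichever release file you downloaded):

```ts
// Hash a downloaded release file so the digest can be compared against the
// one reported in the GitHub Actions build run linked above.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const digest = createHash("sha256")
  .update(readFileSync("yt-dlp.exe")) // example path to the downloaded file
  .digest("hex");

console.log(digest); // compare to the hash shown in the Actions log
```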
Edit: I'd caution against spamming the maintainers though (not you specifically); the possibility of that happening is what swayed me against posting the link originally.
They implemented the censoring of "Claude" and "Anthropic" using the system prompt?
Shouldn't they have used simple text replacement? They could buffer the streaming response on the server and then apply .replace(/claude/gi, "Llama").replace(/anthropic/gi, "Meta") to it while streaming it to the client.
Edit: I realized this can be defeated, even when combined with the system prompt censoring approach.
For example, when given a prompt like this: tell me a story about a man named Claude...
It would respond with: once upon a time there was a man called Llama...
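For what it's worth, a rough sketch of the buffer-and-replace idea (my own illustration, not whatever they actually run); it holds back the tail of each chunk so a name split across two chunks still gets replaced:

```ts
// Streaming search-and-replace that works across chunk boundaries.
const REPLACEMENTS: Array<[RegExp, string]> = [
  [/claude/gi, "Llama"],
  [/anthropic/gi, "Meta"],
];
// Longest search term minus one: a partial match at the end of the buffer can
// never be longer than this, so holding back this many characters is enough.
const HOLDBACK = "anthropic".length - 1;

function createReplaceStream(): TransformStream<string, string> {
  let buffer = "";
  const applyAll = (s: string) =>
    REPLACEMENTS.reduce((acc, [re, to]) => acc.replace(re, to), s);

  return new TransformStream<string, string>({
    transform(chunk, controller) {
      // Replace over the whole accumulated text, then keep the last few
      // characters back in case they are the start of a name.
      buffer = applyAll(buffer + chunk);
      if (buffer.length > HOLDBACK) {
        controller.enqueue(buffer.slice(0, buffer.length - HOLDBACK));
        buffer = buffer.slice(buffer.length - HOLDBACK);
      }
    },
    flush(controller) {
      controller.enqueue(buffer); // emit whatever is left at the end
    },
  });
}
```

You'd pipe the upstream body through it before forwarding, e.g. `upstream.body.pipeThrough(new TextDecoderStream()).pipeThrough(createReplaceStream()).pipeThrough(new TextEncoderStream())`.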
> Shouldn't they have used simple text replacement?
They tried that too but had issues.
1) Their search-and-replace only ran on the first chunk of the response returned from Claude.
2) People started asking questions whose answer contains Claude, like "Who composed Clair de lune?", where the answer is supposed to be "Claude Debussy", which of course got changed to "Llama Debussy", etc.
It's been one coverup-fail after another with Matt Shumer and his Reflection scam.
To add to what sibling commenters have said, you can configure this extension to use a specific Neovim binary on your system, and you can also have it load the same Neovim config you use with Neovim in the terminal. That's what I do (rough example settings below).
It's really the better (Neo)vim extension in my opinion, but it has far fewer installs than the other popular extension, called just "Vim" (6.656M vs. 400K installs). That extension AFAIK actually emulates Vim in JavaScript; I used it for about a year in 2018, before "VSCode Neovim" was released in 2019, and I remember not having a good experience with it then (for comparison, the "Vim" extension was released in Nov. 2015).
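For reference, this is roughly what the configuration looks like in settings.json; I'm writing the setting names from memory, so double-check them against the extension's README, and the paths are just examples:

```jsonc
{
  // Point the extension at a specific Neovim binary (example path)
  "vscode-neovim.neovimExecutablePaths.linux": "/usr/local/bin/nvim",
  // Load the same init file you use for terminal Neovim (example path)
  "vscode-neovim.neovimInitVimPaths.linux": "/home/me/.config/nvim/init.vim"
}
```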
I've been seeing this specific type of sock puppet spam on YouTube for years, with the exact same modus operandi. My earliest distinct memory of seeing this kind of sock puppet comment spam promoting a financial scam is from early 2020, just before the COVID-19 outbreak.
The screenshot shows how they end up as the top comment on the video, as a result of manipulating YouTube's comment ranking algorithm with the sock puppet comments. (Note: it's a big screenshot, you might need to zoom in.)
How is it that Google has not fixed this yet?
How can spam email classification in Gmail be so good while this is allowed to go on?
I believe a small finetuned LLM can easily identify this kind of sock puppeting spam.
Google should do something about this specific kind of sock puppeting comment spam that promotes financial scams and shadow ban them.
Can somebody who knows someone that works at Google ask them to raise this internally?
It infuriates me and makes my blood boil how they specifically target old people, and how they intentionally target YouTube videos that old people are more likely to watch.
You should probably check what one of the online IP geolocation demos says about your IP... (to cite just one among many; quality varies a lot across services and geographic zones: https://www.maxmind.com/en/locate-my-ip-address)
This makes me glad I have iCloud Private Relay turned on for all of my devices and my wife’s devices. Clicking on this link showed my location as Birmingham, Alabama, more than 1000 miles away from my actual location in northern Iowa. Several of the other IP geolocation sites others have linked in this thread showed places like Chicago (closer), and Dallas (much further).
Interesting. I have a static IP, and have kept that same IP through multiple moves around the state, but it knows my current zip code. I wonder if that is because my ISP shares the zip code, or through association with data collected from other sites.
And yet every site that uses IP geolocation for useful purposes thinks I'm in a completely different state that bounces around every few months, if I don't let the browser share my location.
I work for IPinfo.io (feel free to check your location data with us to see if we are correct as well). It is most likely that your ISP is sharing your zip code via a WHOIS/geofeed record.
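(For context, a geofeed per RFC 8805 is just a CSV file the network operator publishes and geolocation providers ingest; the prefix and location below are made-up examples:)

```
# prefix,country,region,city,postal_code
192.0.2.0/24,US,US-IA,Des Moines,50309
```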
For me, Firefox and without iCloud Private Relay engaged, Maxmind is within about 2km and doesn't get the city correct (but we're right on a border), and IPinfo is about 15km as the crow flies (and gets the city entirely wrong).
I thought the same. It's the first time I've ever seen IP geolocation get my home IP address correct. It usually thinks I'm in North Carolina (I'm in Florida).
It is very cheap and easy. Even the free versions of the database available from maxmind are plenty accurate for town level.
At my last job, I built a little Docker image that used the free MaxMind DB and kept it up to date, and ran a Node server that returned some JSON with the estimated lat/long, city name, country, etc.
Cheap, easy, and generally correct for the majority of people*
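Something along these lines, if anyone wants a sketch (this uses the `maxmind` npm package; the database path and port here are examples, not what I actually ran):

```ts
// Tiny geolocation endpoint backed by a local copy of the free GeoLite2 DB.
import { createServer } from "node:http";
import maxmind, { CityResponse } from "maxmind";

const lookup = await maxmind.open<CityResponse>("./GeoLite2-City.mmdb");

createServer((req, res) => {
  // Behind a reverse proxy you'd read a forwarded-for header instead.
  const ip = (req.socket.remoteAddress ?? "").replace(/^::ffff:/, "");
  const record = lookup.get(ip);
  res.setHeader("content-type", "application/json");
  res.end(
    JSON.stringify({
      country: record?.country?.iso_code,
      city: record?.city?.names?.en,
      lat: record?.location?.latitude,
      lon: record?.location?.longitude,
    })
  );
}).listen(8080);
```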
Just because it’s 800km off for your IP does not mean it’s 800km off for every IP and Maxmind is generally considered one of the reliable providers of this information.
I guess the accuracy really depends on your location or ISP.
I believe my ISP rarely or never rotates IP addresses. On top of that, I think my ISP-provided router is assigned an IPv6 address and prioritizes using it, because when I visit whatismyipaddress.com with JS disabled it can only show my IPv6 address, but if I enable JS it can show an IPv4 address too (I assume through the WebRTC IP leak method, which requires JS).
When I built the thing I mentioned (and even if I did so now), I'd just not create an AAAA record for it, because it's still safe to assume IPv4 connectivity exists (and not just via some remote proxy or something), and I think the database I had access to was IPv4-only, at least.
I don't think they need any hackery to get your IPv4; they just need a separate hostname they can fetch from that only has an IPv4 (A) record.
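A sketch of that approach (the hostname is hypothetical, and the endpoint is assumed to simply echo back the address it saw the request come from):

```ts
// "ipv4.example.com" stands in for a hostname that only has an A (IPv4) record,
// so the browser is forced to make this request over IPv4.
const res = await fetch("https://ipv4.example.com/ip");
const ipv4 = await res.text(); // the server echoes the connecting address
console.log("IPv4 address as seen by the server:", ipv4);
```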
If you reach out to support and drop a correction with us, that will be quite helpful. This is an unusually high deviation, so we would like to investigate it.
200 km means there is room for improvement. If you reach out to support and provide a correction, that will be quite helpful. If you mention that you came from HN, I can report back on why we had such a deviation.
Mine started in a different city about 520km away. And I wasn't incognito. Probably a lot more to do with your country, your ISP or coincidence than anything else.
Nice! I bookmarked it and I'm gonna start using it, thank you.
For a quick and dirty save, you can press Ctrl+P to open the print window/dialog and select "Save as PDF", or you can press Ctrl+S and save as a single HTML file.
Edit: to make the text cursor focus automatically when the page loads, you can add the autofocus attribute to the body tag.
While you can't save to localStorage, as my sibling commenters have shown, greyface- posted a version further down in the thread that saves to the hash fragment of the URI. Saving to the (data) URI has a benefit over localStorage: it lets you save by bookmarking, which also enables you to keep many notes, not just one.
I code-golfed greyface-'s code and made the text cursor autofocus on page load:
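Something along these lines (an untested sketch of the idea, not the exact golfed version): a contenteditable body that mirrors its contents into the hash fragment as you type, restores it on load, and has autofocus set:

```
data:text/html,<body contenteditable autofocus oninput="location.hash=encodeURIComponent(this.innerHTML)" onload="document.body.innerHTML=decodeURIComponent(location.hash.slice(1))">
```

Bookmarking the page then captures the hash, i.e. the note.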
> Webstorage is tied to an origin. 'data:' URLs have unique origins in Blink (that is, they match no other origins, not even themselves). even if we decided that 'data:' URLs should be able to access localStorage, the data wouldn't be available next time you visited the URL, as the origins wouldn't match.
Google (the Google Chrome team) has stated before that "[JPEG XL] doesn't provide significant benefits over existing image formats" [1] and has been vocal about its disinterest in shipping support for it (they deprecated the experimental support) [1].
My guess is that the latest JPEG encoder developed by Google researchers, Jpegli [2], has a lot to do with this. Jpegli has been described in a Reddit comment as "a JPEG encoder that was developed by the JXL [JPEG XL] folks and the libjxl psychovisual model" and as having superior performance to (lossy) WebP [3]. That whole Reddit thread has comments relevant to this discussion, specifically about the tradeoffs of supporting extra formats in browsers.