Hacker News
[flagged] Nvidia drivers are detecting and reporting LLaMa/LLM users (imgur.com)
80 points by dmm on April 11, 2023 | 34 comments



Roll to disbelieve. There are only a handful of mentions of this string indexed by Google, and all of them trace back to this imgur screenshot. The screenshot is basically a pastebin: it has no attribution or metadata. It also misspells HKEY_LOCAL_MACHINE, replacing the underscores with spaces.

I'm sure nvidia collects telemetry, and I would not be surprised at all if that telemetry identified who's running language models and which ones they're running. But this isn't what a real notice of that fact would look like.


This seems like it might be a hoax? Can anyone independently confirm that this is actually happening?

Seems like OP just uploaded a screenshot with no additional information. I don't want to claim that they're lying, but extraordinary claims require extraordinary evidence.


It's not too far-fetched, given Nvidia previously implemented an LHR GPU model lineup. I can envision them doing something similar to cripple LLMs in the near future.


> extraordinary claims require extraordinary evidence

Is this really an extraordinary claim, though? The kind of abuse being alleged is unfortunately really common today.


Does anyone know if there is any way to prevent the Nvidia drivers from accessing the internet on Linux?

I would think that I could just add it to some no-internet group or something, but I am not sure if Nvidia could bypass that, given that they have a physical card on the PCIe bus.
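A sketch in the spirit of the "no-internet group" idea, using an iptables owner match. The group name `no-net` and the `run_llama.sh` script are made up for illustration, and this only constrains ordinary userspace processes; anything a kernel-mode blob does on its own could bypass it entirely.

```shell
# Sketch: deny all egress for any process running under a dedicated group.
# "no-net" is an illustrative name. Requires root.
sudo groupadd no-net
sudo iptables -A OUTPUT -m owner --gid-owner no-net -j REJECT
# Run the workload under that group so the rule matches its sockets:
sg no-net -c './run_llama.sh'
```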


You could pass the device into a VM that either lacks a network interface or which otherwise has no viable route.

Extrapolating a bit (regardless of whether or not this is a hoax), we may soon live in a world where GPUs won't compute certain workloads unless they're able to phone home via a connection associated with a registered identity.


Is it possible for the GPU to communicate directly with the network card? I don't know enough about this part of the stack, and I can't find anything about it online.



That might be tough on Linux, since it's a binary blob in kernel space.

Presumably that level of access supersedes any kind of userspace access controls.


If the GPU is handed off at the hypervisor layer via vfio-pci, then the nvidia binary blob gets loaded inside the KVM guest instead. Even though the blob has kernel-space privileges within the guest, it has none at the hypervisor layer.
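A minimal sketch of that setup, assuming a single GPU at PCI address 01:00.0 and the vfio-pci module already loaded; the address and the 10de:1b80 vendor:device IDs are examples (check yours with `lspci -nn`), and `guest.qcow2` is a placeholder disk image.

```shell
# Detach the GPU from its host driver and hand it to vfio-pci.
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 10de 1b80    | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
# Boot a KVM guest that gets the GPU but no NIC at all (-nic none),
# so the proprietary driver inside the guest has nothing to phone home over.
qemu-system-x86_64 -enable-kvm -m 16G \
  -device vfio-pci,host=01:00.0 \
  -nic none \
  -drive file=guest.qcow2,if=virtio
```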


ooof. I remember discussing how effective those drivers were compared to the open source AMD ones a few years back. Now the open AMD drivers are good, and there is all the more reason to be skeptical of the binary blob.

"stallman was right"


The message in the screenshot says "HKEY LOCAL MACHINE", but shouldn't it be "HKEY_LOCAL_MACHINE", with underscores?

Maybe it's just a typo by the Nvidia engineer who wrote it, but why would they even write a notice like this? The language is a bit threatening, and doesn't seem to serve any legal purpose like a disclaimer or EULA. And why would they print it in red?

This looks like it might be a joke or a hoax. Maybe it's meant to draw attention to the near-monopoly that Nvidia holds over chips capable of running generative AI.


The image is quite bitcrushed, so the underscores are probably missing due to sampling error. Notice how on the same baseline there’s a missing period.

But it’s a fishy image regardless.


I've not seen this myself, but I find it suspicious that this is printed at all; specifically, it would mean the nVidia driver has access to print to that console. I mean, it could be true, but this feels a little too far... off.


Does anyone have a dump of HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\LlmResearch?
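For anyone on Windows who wants to check, something like the following should dump it, assuming the key from the screenshot actually exists (if this is a hoax it presumably won't):

```shell
REM Windows command prompt; dumps the key and any values under it.
reg query "HKLM\SOFTWARE\NVIDIA Corporation\Global\LlmResearch" /s
```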


The image is nearly unreadable for me on mobile.


Here's what it says:

Notice: NVIDIA has detected that you might be attempting to load LLM or generative language model weights. For research and safety, a one-time aggregation of non-personally-identifying information has been sent to NVIDIA and stored in an anonymized database. The result of this check on this system has been stored in HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\LlmResearch.

To read about NVIDIA's privacy policy, please visit https://www.nvidia.com/en-us/about-nvidia/privacy-policy/.


"Safety". Whose safety?


Why would Nvidia be concerned with this?

I guess what I'm asking is: what are LLMs used for? I googled it and found it stands for "large language model", but I don't know what that means.


It's what things like Bard, ChatGPT, and BLOOM are all based on. You (typically) need GPUs to load the weights into memory and run them.


Thanks.

Why would Nvidia want to prevent someone from doing it though (I guess they have a competing product?), and why should any consumer accept being prevented from doing it on hardware they've purchased? Is that even legal?


To get a cut of the compute-cake. See: Nvidia LHR GPUs


Nvidia! Fuck you!


Any other users confirmed yet?


Is this surprising? NVIDIA drivers have been reporting what games you open, what programs load the GPU drivers, etc., for decades now.


For decades now? Plural? They were doing that back in 2003?


I'm on the latest drivers and don't see that registry key... What's the software triggering it?


Go to the torrents and get the leaked Facebook LLaMA weights. Scan HN for the plenty of GitHub "how to" entries for said LLaMA. Follow the steps there and load the model. See whether that's the trigger or this is a hoax.


Multiple users are reporting that Nvidia is uploading telemetry when they load LLM weights.


Where are users reporting this?



I am not very familiar with Nvidia. What does telemetry mean in this context/what are the implications?


Source? Where are these users?


Are there any steps anywhere on how to reproduce this? Otherwise I’d assume it’s a hoax



