Nvidia’s New Policy Limits GeForce Data Center Usage (wirelesswire.jp)
242 points by gone35 6 months ago | 145 comments



Developers should be mad about this. Here's where to start contributing to open alternatives:

ROCm: https://rocm.github.io/

OpenCL: https://www.khronos.org/opencl/

TensorFlow: https://github.com/tensorflow/tensorflow

Nouveau: https://nouveau.freedesktop.org/wiki/


TensorFlow depends exclusively (at time of writing) on CUDA for GPU acceleration. Leaning on them to support OpenCL fully would unlock the world of AMD GPUs, which are more affordable and don’t have the same crazy EULA. I regret having bought GeForce cards for machine learning now...


Actually, AMD has a version of tensorflow that works on their devices:

https://github.com/ROCmSoftwarePlatform/hiptensorflow


TensorFlow also supports Intel Xeon Phi as an alternative

https://software.intel.com/en-us/articles/tensorflow-optimiz...


AFAIK Xeon Phi doesn't come close to the performance per buck of GPUs in deep learning


Instead of “leaning on them”, you can submit patches. The OpenCL effort is well underway, and pull requests are accepted every day.


>Instead of “leaning on them”, you can submit patches

Do you know that he has the right skill set or the time to maintain the code for years to come? Do you know that the effort would be economically viable for him like it could be for AMD or perhaps Google?

If not then I would prefer if he didn't submit a patch, because he would be wasting everyone's time.

Sometimes the best contribution you can make to an open source project is to express your interest in particular features so that those who really can submit a patch know that there is demand for it.


For the longest time they have encouraged people to use GeForce cards for doing machine learning. Changing the licensing now to make data center use not okay is not gonna sit well with people, especially with the carve-out for Bitcoin miners. The customers who are impacted, as the article points out, will mainly be universities. This is a bad move overall and isn't gonna gain them any goodwill with the community. Like, seriously, how many companies out there are putting GeForce cards into data centers? The big data centers from cloud computing platforms all use Tesla cards.


The exemption makes it even more ridiculous, not only do they want to restrict where you can use the software but they also want to have a say in what kind of applications you can use their software for.

This besides the fact that Bitcoin on GPUs is fairly dead, it's almost entirely done on ASICs now.

If you read the license agreement it states that the term applies to 'blockchain processing', not to bitcoin specifically.


Indeed.

There are a number of "alt-coins" (blockchain-based cryptocurrencies other than bitcoin) designed to be "ASIC-resistant" e.g. by using the Lyra2REv2 algorithm.*

This is notably the case of Monero and Monacoin, two popular cryptos in Japan. Generally, the Japanese and Koreans are very enthusiastic about cryptocurrencies in general.

Cryptocurrency GPU-mining on Tesla cards would be outright unprofitable.

These facts may contribute to Nvidia's decision not to displease Japanese GPU miners specifically.

____

* This is to avoid big ASIC farms as is the case of bitcoin (apparently, more than 70% of bitcoin's hash rate comes from China, most notably from a specific valley where electricity is dirt cheap). One purpose of ASIC-resistance is to maintain a high-enough degree of decentralization in mining, which increases security for the network/blockchain.


$20,000 btc can make mining on gpu's potentially profitable again, more so as the price goes up. If the price holds the 6 months or a year it takes to mine one is another matter entirely :)


I thought not - you have to win the race to confirm the block, and that means terahashes per second... which is unrealistic on a GPU cluster?


You'll have to wait for a few hours until it is back at that level again.


The fact that "humps" exist implies that quite a few people are putting GeForces in servers and not talking about it. https://www.servethehome.com/avert-your-eyes-from-the-server...


Plenty of people. You can find tons of articles on GeForce 1080 Ti based learning box builds.


Here's a figure on the scale-out you can get on the DeepLearning11 server (cost $15k) - it's about 60-80% of the performance of a DGX-1 for 1/10th of the price (for deep ConvNets and image classification, at least).

https://www.oreilly.com/ideas/distributed-tensorflow


I think this is because they want people to use Titan V workstations. They know that the Titan V will cannibalize Tesla sales if they let people put them in data centers. It's still a shady move, but it makes sense.


shady... shader... bad pun.


In good news, this may finally force a court case around EULAs, which basically everyone has known for a decade are bullshit.

Having to agree to draconian terms to use something you own is ridiculous. If EULAs become enforceable, there's nothing preventing a toaster maker from banning you from toasting bagels in your machine, and countless other absurd, usurious machinations. EULA 'shrinkwrap' license agreements are a scourge on the free market and should be banned unilaterally.


Agreed, however this depends heavily on jurisdiction. E.g. in Germany [0], license agreements presented after purchase are completely void anyway (although defining the moment of purchase for non-physical goods is tricky - not so for a graphics card though). So in Japan, maybe?

[0] https://de.wikipedia.org/wiki/Endbenutzer-Lizenzvertrag


Well, Geforce GPUs are advertised more heavily for graphics than computing. If I were nVidia I'd try to argue that CUDA and other GPU computing libraries are separate products not purchased with this line of GPUs.


If you don't pay for them, but need them to use the product, is it really a purchase?


The thing with EULAs is that you do not own the product. You are licensed to use the product under the company’s draconian terms. I agree that there need to be new laws protecting consumers.

A contract in the 1800s used to be one to two pages of simple English whose terms both parties negotiated and agreed on. Now EULAs almost require a degree in law to understand, yet you are bound to the terms under civil liability. Furthermore, EULAs are so long that most people blind-sign without reading and understanding the terms.

There are also questions about whether terms in EULAs are legal, such as limiting free speech about the product. In many countries free speech is a right protected by basic constitutional law, like the First Amendment in the US.

Can you limit free speech?


> In many countries free speech is a right regulated as basic first amendment laws.

That's definitely not true, please realise the whole free speech argument is useless outside the US. Anyway, I think you'll find it hard to get people interested with such an argument, even if it is an interesting way to put it.

But your first point - the argument about owning a product - is easy to understand and much more persuasive. That's why the John Deere DMCA thing got so much press, and also why I'm convinced that the downfall of the EULA will be because somebody tried to limit ownership of a physical product (and not software, which may or may not reap the benefits, but won't be the catalyst).


> can you limit free Speech?

God, this is getting out of hand...

Yes, of course you can. Confidentiality agreements, non-disparagement clauses in settlements... it happens all the time.

Not to mention that the first amendment limits government censorship. Private parties are, as a first approximation, free to agree to whatever terms they like.

There’s also fraud, obscenity, and defamation laws. All limiting free speech. And, of course, using graphic cards has nothing to do with free speech.


That’s an argument for software licenses. My NVIDIA GPU on the other hand is hardware I own.


But is the hardware even any good without the NVIDIA firmware baked inside?


Or, is adding some firmware/software enough to impose shitty licensing on de facto hardware products, like printer cartridges or Keurig cup things [0]?

If not, where do we draw the line? Luckily, it seems that people hate this kind of thing.

[0] https://www.theverge.com/2015/2/5/7986327/keurigs-attempt-to...


Good luck using it to its fullest potential without their drivers.


The EULA is for software (=the driver)


"Can you limit free speech?"

Yes, in every country on Earth, for very obvious reasons

Ever signed a non-disclosure agreement?


I'm confused - does the EULA prevent use within one's own data center? Or is it just that a cloud computing service such as AWS or Azure cannot offer them? Because if it's the latter then the article doesn't hold nearly as much weight imo.


This is the relevant change in the EULA you need to accept when downloading (installing?) the GeForce driver. The article is weirdly translated, this is apparently a global NVIDIA thing and not specific to "NVIDIA Japan".

> No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.

Edit: http://www.nvidia.com/content/DriverDownload-March2009/licen... and https://www.geforce.com/drivers/license/geforce


I'm not a legal expert and don't understand EULAs at all. I expected a "Datacenter" to be defined at some point but I didn't see a definition in the EULA.

How should one interpret this? Is a research cluster a datacenter? How about a rack of a few dozen machines I built by hand? I've been in university research groups that had both options.

Somewhat maddening to have a single line in the EULA that's so open to interpretation.


IANAL, but in the law there's a thing which is called Contra proferentem: https://en.wikipedia.org/wiki/Contra_proferentem Basically, it means that ambiguous terms are interpreted against the contract drafter.


You get that pretty much everywhere that used to be British common law iirc, and it’s why contracts frequently define seemingly minor and silly terms, ones which are obvious such as “the undersigned“ and all manner of the seemingly banal. In essence the contract is first and foremost agreeing on the same language, to express a commitment which ideally all parties are fully cognizant of.

Edit: spelling


Also, there are no punitive damages in U.K. law. So what’s the worst that could happen? Would you be put back in the position you would have been in had the contract not existed, i.e., give the cards back plus a contribution towards depreciation? Would be interesting if someone with legal knowledge could chime in.


They could potentially claim damages for the difference in price between the GeForce and the equivalent Tesla, using the argument that using the GeForce in a DC has cost them that amount in lost revenue.

Of course, a counterclaim for existing owners would be that the product they purchased was licensed for DC usage at time the contract was entered into, and so the original rights cannot be unilaterally revoked without consent.


They don't want commercial cloud, bare-metal, and "machine learning as a service" platforms to be based on GeForce cards. Such a service is either insignificant in its size or based in a datacenter.

I would not stress about this on my own hardware for personal or company-internal use, even if it's a full rack or more in a datacenter. Nvidia is not going to be able to tell the difference between that and a rack in my basement.


Undefined terms like that speak to the weakness of the EULA as an enforceable contract.


When you're not a legal expert and you feel like you are about to maybe break a term in someone else's license agreement, the right avenue is to consult your company or university legal department.


What if you write your own block chain where the POW is "do a deep learning task"?


This is the concept behind openmined.org - the “miners” are rewarded for improving the gradient (the data is anonymised).
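The "useful proof-of-work" idea above can be sketched as a toy, purely hypothetical scheme (this is neither Bitcoin's actual PoW nor OpenMined's protocol): treat "model loss below a difficulty threshold" the way hash-based PoW treats "hash below a target".

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: the "useful work" is fitting y = 2x + 1.
x = rng.uniform(-1, 1, 100)
y = 2 * x + 1

def loss(w, b):
    """Mean squared error of the candidate model (w, b)."""
    return float(np.mean((w * x + b - y) ** 2))

def mine_block(prev_hash, target_loss=1e-3):
    """'Mine' by random search until a candidate model beats the loss
    target, then seal it into a block hash. A real scheme would need the
    work to be cheaply verifiable and hard to game; this is only an
    illustration of loss-threshold-as-difficulty."""
    attempts = 0
    while True:
        attempts += 1
        w, b = rng.normal(2, 0.5), rng.normal(1, 0.5)
        if loss(w, b) < target_loss:
            payload = f"{prev_hash}:{w}:{b}".encode()
            return hashlib.sha256(payload).hexdigest(), (w, b), attempts

block_hash, model, attempts = mine_block("genesis")
print(block_hash[:16], model, attempts)
```

Verifying such work without redoing it, and stopping miners from submitting precomputed or stolen models, is the hard part, which is presumably why schemes like this are rare in practice.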


courts aren't (yet?) ruled by machines; you'll have a hard time arguing this to an actual human being


And an actual human judge is not going to sympathize with a have-your-cake-and-eat-it-too EULA like Nvidia's, especially if a competitor is offering a product that is more efficient for the consumer


I might be wrong, but doesn't modern Windows just silently install these from Windows Update, without EULA or user consent?


If this is just a software licensing change maybe the OSS stuff will gain momentum!


If true then Nvidia has gone nuts. There is no way they can control what end users do with their products and/or their software post-download, to the point of dictating in which locations you can and cannot use their software.

Utterly ridiculous. For a company that has seen my solid support so far I'm extremely disappointed.

Could someone fluent in Japanese please confirm the claim in the title?


It's true, here is the updated license: http://www.nvidia.com/content/DriverDownload-March2009/licen....

Text:

  No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment,
  except that blockchain processing in a datacenter is permitted.


So, once again proof that open source is the only way to go.


Open-source drivers don't even support Tesla, not to mention Volta. An EULA challenge seems to be the way to go here. I'll be emailing legal for advice once I get back to work.


I think the point is less about using nouveau drivers for Nvidia hardware than that we should never have accepted CUDA as a standard, since the implementation isn't free. Proprietary standards leave us at the mercy of the owners.


There's no date on the page itself, and the link contains the string "March2009". Is this really a recent change?

EDIT: just tried to download a driver, and this is the page that gets displayed.


Back in the '90s, Borland got many customers pissed off by including a clause in their EULA preventing them from using Borland tools to write compilers.

It was quickly removed, given the backlash.


What’s interesting to me would be seeing a physical tool company enclosing an “EULA” with something like a router bit, saying you can’t use their physical bit to make other tools.


This seems no different than Tesla prohibiting use of AutoPilot for gainful ridesharing.


A large amount of computing equipment is sold with the restriction that it not be used to control a nuclear power plant.


Did you just make that up?

Safety-critical systems integrators only buy computing equipment explicitly designed to be safety-critical. That's a requirement from the system designer (or possibly the law), not the other way around.

If a nuclear plant designer goes and npm-installs some random package that doesn't explicitly say it's not designed for nuclear plants, no amount of lawyers will be able to sue the package maintainer if it fails.


> Did you just make that up?

You could have just googled "nuclear power EULA" to see how common this is, instead of throwing around accusations.


I mean, it's actually very common

To underscore this, go look at the Java license agreement, which forbids its use in any life critical environment

You can't even use Java to raise and lower the wooden arm in front of a toll bridge


Yes, we all know EULAs contain a lot of moronic crap; just because Company A includes moronic things in their EULA is no excuse for Company B to include equally moronic things in theirs.


They already were artificially limiting virtualization on GeForce cards and always pointed to their data center cards when asked about it. It goes as far as the physical location of the power connectors on top of the card, so it doesn't fit in a server/rack case. I imagine it makes sense from their point of view; the data center cards like Tesla are ridiculously more expensive than the consumer cards. NVIDIA was never really a consumer-friendly company: instead of cooperating on open standards they always went their own proprietary way. That's the obvious danger when a company becomes the de facto standard in some area, they exploit their position for profit. That can't possibly still be a surprise to anyone.


This is just a blatant attempt to sell the exact same product to two different types of customers at different prices based on their (presumed) willingness to spend.

Actually it's worse. It's an attempt to cripple a perfectly functioning cheap product aimed at a different audience (gamers) for another audience (machine learning folks) in order to drive them to more expensive products that do the same thing.

Pure greed.


You’re going to be surprised how CPUs are price-segmented in that case (they’re cut from the same wafers and priced by the clock frequency QA validates them at).

Price segmentation isn’t necessarily bad; you want to segment your product in order to maximize revenue across all possible customers. This is just a more egregious/bold attempt at doing so (and is hopefully prevented by a court).


Binning CPUs on clock frequency or number of functioning cores is a bit different. If you compared it to a precious metal, the higher price CPUs have fewer imperfections, therefore they cost more. The fact that they carry different product numbers is a bit misleading but the price segmentation actually makes sense.


The binning is a bit arbitrary, hence overclocking. A large proportion of parts binned at a certain clock speed will perform perfectly well at higher clocks. Bins are decided and priced based mostly on marketing rather than fab yields.

Intel disables a lot of features on low-end chips, which has nothing to do with yield management. There's no technical reason to disable vPro and Turbo Boost on Core i3 chips.


We have both Xeon and i7 workstations in my office and functionally there is no difference, except that the Xeon ones cost 2-3x more. ECC support is not exactly worth it for normal workstation use.


NVIDIA already does some form of segmentation with Quadro cards (i.e., artificially crippling things in driver/firmware). They have complete control over firmware, drivers, and CUDA. More than through the EULA, it's likely that they'll differentiate in software. E.g., cuDNN may stop working on GeForce, or some rubbish like that.


Sure, but CPU makers don't tell you what software you can run or in what context.


I don't think it's an attempt to cripple the use of their gaming cards so much as it is to cripple the use of their new Titan V.


While I agree it's likely motivated by greed, there is a legitimate reason too: GeForce cards simply aren't made to be run 24/7 in high density server environments. Put 200 of them in a rack and have them running inference or training 24/7 and I suspect you'll see them drop like flies. Tesla cards are explicitly designed for endurance.


You are 100% wrong. When that’s the concern, companies add warning labels or warranty limitations.

Whether the GeForce cards are less reliable in that environment or not, that is not why the EULA was changed in this way.


I don't think you read my comment.

I said that they're likely motivated by greed but there _exists_ a legitimate reason too. I didn't say that _is_ their reason, just that it exists.


We have been running a DeepLearning11 server for a few months now without any problem. I suspect it's FUD from Nvidia. If you have fans in your data center actively cooling the cards, you'll be OK. You should use the 1080 Ti Founders Edition or a blower card, as they have a single fan that blows air out of the box (in the DL11 setup).


If your assessment of GeForce reliability is correct, then there's no need for the clause in the EULA at all: people will try to use them in a datacenter, they will fail, and people will use the DC-oriented cards instead.

But that's not the case. Many people are finding they can use GeForce cards in a datacenter just fine, and it's cheaper than buying the DC-oriented cards. This is a money grab, pure and simple.

Not to mention that the EULA has a carve-out that says blockchain processing is permitted. So nvidia is even acknowledging that there's no hardware reliability issue.


consider the warranty


I read it, and I’m saying there does not exist any legitimate reason to add that language to the EULA. There’s no benefit it provides in addressing the problem of supposed reliability. It’s just almost never the way companies address such issues.


Are you implying that GeForce cards have DC failure scenarios for specific computations only? The clause "... but it is OK to put them in DCs if you wanna mine some coins" goes against your argument.


This is a problem I have with driver EULAs. If I buy hardware today under terms X, should the manufacturer be allowed to unilaterally change the terms I must accept in order to keep using that hardware securely and effectively, given that doing so requires updates to fix bugs and vulnerabilities the manufacturer created?

Basically, if I buy a GPU and discover a bug in the product, can Nvidia hold that bug fix hostage and refuse to fix the product I bought under one set of terms until I agree to a new set of terms?

At a minimum, I believe the manufacturer should be required to refund the full original purchase price of any product whose terms they change, to anyone who objects to the new terms, even if the new terms come out YEARS after the product was sold and well outside any warranty period


More interestingly: the targeted datacenters have installed drivers with EULA that didn't include this clause. So if they don't upgrade the drivers, they should be OK?


Japanese original:

https://wirelesswire.jp/2017/12/62658/

--

Edit:

Apparently it's just that GeForce cards have(/had?) no warranty for use in "data centers", but academic use is not precluded in and of itself:

https://twitter.com/NVIDIAAIJP/status/943141204744585222

https://wirelesswire.jp/2017/12/62667/

Still, the line is somewhat unclear; I wonder how many university/academic cluster admins are aware of this fact.


Datacenter conditions for GPU clocks/power/air tend to be gentler than desktop conditions, so the warranty just disappearing is far from justified.

Edit: Ha, ridiculously unjustified if bitcoin mining is still covered.


So if a data center is using these cards and is eventually forced to install a newer driver, they are now in violation?

That seems a bit insane: to have used a product that then amends its own EULA to prevent you from using it entirely...


The simple solution is to have some simple software that processes a blockchain in some low priority way in order to easily bring you back into a licensed state.
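As a sketch of how trivially that letter-of-the-law dodge could be implemented (hypothetical; whether token hashing actually counts as "blockchain processing" under the EULA is a legal question, not a technical one), a background thread could hash at a tiny duty cycle while the real workload runs:

```python
import hashlib
import threading
import time

def background_hashing(stop, duty_cycle=0.01):
    """Hash 'block headers' in a loop, sleeping so the work uses at most
    roughly duty_cycle of one core. The digests are discarded; a real
    miner would compare them against a difficulty target."""
    nonce = 0
    while not stop.is_set():
        start = time.monotonic()
        hashlib.sha256(f"header:{nonce}".encode()).hexdigest()
        nonce += 1
        busy = time.monotonic() - start
        # Sleep long enough that hashing stays a token background task.
        time.sleep(max(busy * (1 - duty_cycle) / duty_cycle, 0.001))

stop = threading.Event()
worker = threading.Thread(target=background_hashing, args=(stop,), daemon=True)
worker.start()
# ... run the actual deep-learning workload here ...
stop.set()
worker.join()
```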


If they manage to stick this - which is a big if - wouldn't that just cause a major pivot of interest to ML on AMD? Sure, OpenCL is very much the poor cousin right now, but an 8x cost hike is a pretty strong incentive to look around for your Best Alternative Elsewhere.


Realistically, this change is most likely aimed at "forcing" the hand of very large companies to co-operate with using Tesla instead of GeForce-based stuff.

Think Amazon, Large Supercomputers, etc. They probably won't care or chase the smaller folks (be careful if you get too big though :-)

I wonder if these terms are also on the Linux downloads?


In the case of Amazon, who is responsible? Amazon might provide the hardware to their customer via an EC2 instance, but it's the end customer who downloads, installs, and uses the driver. The EULA restriction is on using the driver, not the hardware.


This is true, but (1) Amazon likely has special agreements with NVIDIA for all sorts of reasons, if they did this kind of thing against NVIDIAs wishes they'd just push back with those agreements (e.g. pricing on proper GPUs, etc) (2) Many big customers won't like it

This kind of behavior works best for smaller players, who are the ones they don't really care about either.


I wonder how this can be enforced?

How will nvidia know how its users are using the GPUs? How will they know if a GPU sits in a datacenter, or in a desktop with xeon CPUs?


Curious coincidence... I was thinking of something similar earlier today.

What if physical objects had EULA licenses like software?

Something more-or-less serious, like "you are not allowed to use this knife to eat meat", or "you are not allowed to use this spoon to eat jelly", or even "it is forbidden to publish a benchmark of this car" (related to an article that passed here on HN like a week ago).

Would such EULA licenses be enforceable? And if so, assuming companies would be allowed to enforce clearly ridiculous terms, what does this tell us about software licenses?


I would compare it to: you can't use this TV to watch Fox News because Samsung does not like the LG ads they run.


Physical objects are sold. Software (GPU drivers) is subject to copyright law: it's licensed.


Many (most?) new high-end consumer products run on software now. Your car, your fridge, your washing machine, your thermostat, etc. You do own the product, but they could make you "activate" the software license using the web, with an included EULA to click through. I suspect we will see some abuse of this in the future.


And here it is again: https://xkcd.com/743/

"It's the world's tiniest open-source violin."

As much as I disagree with most of RMS's politics, the man has been proved right time and time again with respect to the dangers of building on proprietary software.

This is just the latest in a long, long line of outrages.

When will we as a development community quit this weird Stockholm-syndrome-like relationship with proprietary systems vendors?

Build on NVidia, build on iOS, build on OSX, build on Windows, that's fine. It's your call. But don't act surprised when you discover the real nature of the extant power relationship.


> When will we as a development community quit this weird Stockholm-syndrome-like relationship with proprietary systems vendors?

When most companies stop treating open source as a way to decrease their development costs, or as a legal way of doing piracy.

I was big into open source for a decade, but it hardly paid most bills.

Only software that requires consulting, training, or can be hidden behind a server wall is profitable as open source.

And since the supermarket lady and my landlord don't take pull requests, I build for systems vendors.


I think you may have missed my point (which in turn implies I didn't explain it well ;) ).

What I meant was: if you're building a system for your own use (including things like ML platforms and SaaS sites), why pick a target platform that is known to be hostile?


Did they do a find/replace of “NVidia” to “NVidia Japan”? It is strange to attribute everything to the subsidiary.


The original Japanese posted elsewhere does not make the distinction; it's just NVIDIA. So, it's a translation thing?


I can see how you might try to limit warranty claims under unapproved operating conditions (like 24/7 100% duty cycle) but this is a bridge too far.


From a sales point of view, Nvidia actually doesn't care that much about researchers, they make up a small percentage of sales. They care about gamers (VR has been a big help), and especially cryptocurrency mining where customers will buy multiple video cards at a time (kaching). There are even blockchains now that are resistant to ASIC mining, where GPU mining is the best again. (BTG for example)

I suspect that from Nvidia's viewpoint, cloud providers are a middleman and make GPU usage more efficient, which threatens sales. Why should I, as a cryptocurrency miner, buy 4 new video cards when I can rent unused time that has already been paid for by someone else at the data center? (if the cost is cheaper, which sometimes it is)

Though I question how they intend to enforce this new provision of theirs given that GPU cloud usage is already widespread.


The inverse I think might be true. If the GPUs are much more accessible on the cloud, deployment and placement problems handled, then more developers will be comfortable building around that. Otherwise, it's a massive hassle to have physical machines with specialized hardware.


I don't know if this article is correct; sans a legal/safety reason or a voided warranty, this doesn't seem like something Nvidia could legally prevent you from doing.

That said, I use GeForce GPUs in production datacenters and this wouldn't stop me. I put a dummy load on the display output and use them to take screenshots of web sites with WebGL for Neocities. It's the only way to really do it. The Tesla blowtorches are just for ML crunching, I don't think they work for what I use datacenter GPUs for.

Also my datacenter power connection I get would trip under the Tesla's power requirements, to say nothing of the costs. Might as well plug a Tesla car in while I'm at it.


I thought the NVIDIA drivers for the fancier cards (Titan etc.) are the same as for the GeForce cards. Wouldn't this restriction apply to those cards as well? Doesn't make much sense to me...


They can't price-differentiate FP64 compute out, since ML uses FP32 or even FP16. They tried discriminating FP16 performance but frameworks switched to using FP32 units and downconverting to FP16 after. They can't kill FP32 performance since that's used for gaming. They tried killing the virtualization, they tried differentiating based on clustering, they tried every reasonable technical procedure. So now they're falling back to legal means, to defend an artificial price distinction that has no reflection in card features that anyone cares about.
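The FP32-accumulate/FP16-store trick described above can be illustrated on the CPU with NumPy (an illustration of the numerics only; frameworks do this inside fused GPU kernels): weights and activations live in FP16, but the matmul is carried out in FP32 before downconverting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights and activations stored in FP16 (halves memory and bandwidth).
w16 = rng.standard_normal((256, 256)).astype(np.float16)
x16 = rng.standard_normal((256, 64)).astype(np.float16)

# Upcast, accumulate the matmul in FP32, then round the result back
# down to FP16 -- the "use FP32 units, downconvert after" pattern.
y16 = (w16.astype(np.float32) @ x16.astype(np.float32)).astype(np.float16)

# Compare against a float64 reference: the remaining error is dominated
# by the final FP16 rounding, not by accumulation over the 256-term sums.
ref = w16.astype(np.float64) @ x16.astype(np.float64)
err = float(np.abs(y16.astype(np.float64) - ref).max())
print(y16.dtype, err)
```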


In the future, Teslas will have Tensor Cores and GeForces won't, so deep learning will be much faster (but also much more expensive, so it kind of cancels out) on Tesla cards.


Yeah, doesn't it seem a little weird that they seem to want to enforce the distinction between gaming and industrial cards, but they also went ahead and included tensor cores in that new Titan card?

Did they not think that through and have an 'oh shit' moment when they saw all the news articles or something? Or is this a 'the first hit is free' sort of deal where they want people to learn about using the product on a startup budget, without giving up the ability to squeeze if someone has a good idea and wants to scale?


Titan V is so expensive that it's not cannibalizing anything. Once the GeForce 1180 comes out... they'll make it just slow/expensive enough that it still doesn't cannibalize anything.


They did take the "GeForce" label off of the new Titan. And the 1180 might not have any tensor cores at all.


Are you sure that they won't do it? Did they mention it somehow?

AFAIU, tensor cores are just super-efficient matrix operations, and they might be super useful in gaming applications, for example physics engines.


They have just released the new Titan V, which has Tensor Cores I believe. That would indicate that they do want to include them in non workstation/dedicated ML cards, no?


Except that they dropped the GeForce branding from the Titan V, and appear to be targeting the card for compute at developers/researchers [0].

[0] https://nvidianews.nvidia.com/news/nvidia-titan-v-transforms...


The Titan V is a compute card, the same way the Titan XP and Titan Xs were. Differing factors between them and the xx80[Ti] of the corresponding generation were memory bandwidth and capacity - gaming performance was nearly identical to the geforce. Titans are entry-level compute.


That is probably the point.

They want to sell two fungible products while enforcing a pricing hurdle so that certain types of customers have to pay much more for the same performance.


This should be a rallying cry to the developer community to stop using software with restrictive licenses.

https://m.youtube.com/watch?v=9sJUDx7iEJw

“Join us now and share the software!”

In all seriousness anybody using CUDA should immediately start checking out and contributing to ROCm, AMD’s Open Source Compute Platform:

https://github.com/RadeonOpenCompute/ROCm


While I agree with you, saying things like "dirty software hoarders" kind of kills your message.


Hell ya, more reason for people to switch to AMD.


One of the benefits of the AMD-ATI acquisition is that they can weather GPU shortage/oversupply more easily. I assume that NVIDIA is worried about an oversupply of GeForce GPUs when the mining boom ends, right?


I was wondering how long it would take them to pull something like this. I've seen startups (and some established hosting companies) offering cloud and bare-metal GPUs with GeForce cards - which are far cheaper than the official datacentre cards Nvidia sells.

On the positive side, hopefully this will mean a consumer GeForce Tesla soon, now that they think it won't cannibalise their datacentre GPU sales.

I really wish ROCm worked seamlessly and they had some competition.


I wouldn't call a uni's server room a "datacenter". NVIDIA should clarify that it doesn't apply to experimental researchers.


I really can't understand why this license change wasn't mentioned at all outside Japan until now.

This issue has become widely known because a famous[citation needed] Japanese entrepreneur accused NVIDIA over the recent silent license change.


I can see the reason they are making this move: the supply of GeForce video cards is being used up for things other than graphics duty, like blockchain mining or AI training. It's probably a desperate supply move.


Unless we're talking about the Tesla cards, Nvidia is about the worst in its GPU class as far as mining goes. Nobody uses them aside from people who are exceedingly frugal to their own detriment. There is a reason their prices aren't scaling linearly with the mining boom like AMD's: they are simply not as good as AMD in most mining workloads.


They specified that this doesn't apply to blockchain mining.


While I think it's probably part of the reason, if it were the only reason there are bound to be other, less customer-hostile ways of solving that particular problem.


They can't price-differentiate on FP64 compute, since ML uses FP32 or even FP16. They tried crippling FP16 performance, but frameworks switched to computing on the FP32 units and downconverting to FP16 afterwards. They can't kill FP32 performance since that's what gaming uses. They tried killing virtualization, they tried differentiating based on clustering, they tried every reasonable technical measure.

This seems consumer-hostile because the entire thing is consumer-hostile - they want ML researchers to pay more for graphics cards because they have more money to spend, not because they can offer superior performance. (They can offer superior performance to GeForce, just not superior performance/$ or performance/W.)


You cannot just look at that; the Teslas have higher memory bandwidth and more memory, which can help utilize the GPUs better by providing faster access to more data.


If you're in the specific range where your problem doesn't fit in a GeForce and does fit in a Tesla, then it can be great.

For a very large number of problems that are smaller than both or bigger than both, that extra memory bandwidth is a lot smaller than the price difference.
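A back-of-the-envelope way to check which side of that line a problem falls on (purely illustrative numbers and accounting, not official specs):

```python
# Does a working set fit in GPU memory? Hypothetical 11 GB GeForce-class
# card vs. a 16 GB Tesla-class card; the accounting below (params +
# gradients + one activation buffer) is a deliberate oversimplification.
def fits(num_params, batch_elems, bytes_per_value, mem_gb):
    need = (2 * num_params + batch_elems) * bytes_per_value
    return need <= mem_gb * 1024**3

model = 1_500_000_000   # 1.5B parameters
batch = 50_000_000      # activation elements for one batch
print(fits(model, batch, 4, 11))  # FP32 on the 11 GB card: False
print(fits(model, batch, 4, 16))  # FP32 on the 16 GB card: True
```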


This EULA change is not acceptable. For example: if a game developer renders gameplay video for customers in a data center, that would be a violation of the EULA.


So... fuck the license agreement. They can't sue everyone.


It's so weird to see the responses in this thread of people saying they're going to ask their legal department. Have fun when your self-justifying legal department turns around and dictates the company can only buy the more expensive cards, they now want to review the legalese bombs for everything else, etc.

Just ignore the wordbarfs as the UPPER CASE INCOHERENT SHOUTING THEY ARE. Everybody knows that nobody reads "EULA"s, and a meeting of minds is a necessary requirement for there to even be a contract. These fantasy "terms" literally only exist to the extent we acknowledge and dwell on them!


This is more about making a statement of defiance against openly monopolistic and abusive terms, about showing companies that they do not get to twist the market around like this. Show them that customers are in charge, and that companies are mere stewards.


Also, in the US at least, doctrine of first sale means NVIDIA's power in this space should be close to nothing.


Until you need to update the drivers and agree to the restrictive EULA.


Not sure how courts see the issue now (and it may vary by district), but last I checked, in some cases an EULA is insufficient.


Does this apply to just the GeForce app, or the drivers themselves? If it’s just the app you can always manually install drivers.

And how does it affect Linux kernel drivers?


Good luck enforcing this nVidia.


What’s the tl/dr on this?


So, it's just a matter of reverse engineering the driver?


Or just using it despite the EULA? It's not like the driver knows where the server is physically located.


> It's not like the driver knows where the server is physically located.

Well, it probably currently doesn't guess the server location, but it doesn't seem impossible or even hard if they think it's useful to them.
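Purely speculative, but the kind of heuristics a driver could use to guess it's running in a server rather than a desktop might look something like this (all names and thresholds here are made up for illustration):

```python
def looks_like_datacenter(gpu_count, has_display, hostname):
    # Tally weak signals; no single one is conclusive on its own.
    signals = 0
    if gpu_count >= 4:   # dense multi-GPU boxes are rare on desktops
        signals += 1
    if not has_display:  # headless machine, no monitor attached
        signals += 1
    if any(k in hostname for k in ("node", "rack", "gpu")):
        signals += 1
    return signals >= 2

print(looks_like_datacenter(8, False, "gpu-node-17"))  # True
print(looks_like_datacenter(1, True, "my-desktop"))    # False
```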


When does a bunch of servers become a datacenter, though? When a university group runs its own servers that only the group has access to, is that a datacenter? If I buy two servers and put them in my basement, is that a datacenter? What if it's only one server?


Typical HN kneejerk comments.

The GeForce series cards are designed, spec'd, and priced for consumer usage. Nvidia has to warranty and guarantee what they sell; consumer confidence is important.

When you're doing ML or blockchain Proof-of-Work the card becomes a consumable. Is it fair for manufacturers to guarantee them like they would a consumer desktop or gaming GPU? You expect people who play with ML in their free time to subsidize your GPU cluster?

That said, hope this gets people angry enough to make TF support OpenCL. CUDA is too pervasive.


They want people to buy their more expensive card for ML stuff. The silicon probably isn't all that different.


Not all chips are born equal; that's why faulty or poorly performing cores and components are deactivated and chips are sold in tiers. Look into it: the same silicon on paper, but not in practice. The problem comes from those specs and the warranties, guarantees, consumer protection, etc.

So choose: overbuilt GPUs (more expensive), firmware-gimped GPUs (less performant), or non-warrantied (risky).



