Not open source. Even if we accept model weights as source code, which is highly dubious, this clearly violates clauses 5 and 6 of the Open Source Definition. It discriminates between users (clause 5) by refusing to grant any rights to users in the European Union, and it discriminates between uses (clause 6) by requiring agreement to an Acceptable Use Policy.
EDIT: The HN title was changed, which previously made the claim. But as HN user swyx pointed out, Tencent is also claiming this is open source, e.g.: "The currently unveiled Hunyuan-Large (Hunyuan-MoE-A52B) model is the largest open-source Transformer-based MoE model in the industry".
How's that NYTimes vs OpenAI lawsuit going? Last I can find is things are hung up in discovery: OpenAI has requested potentially a century of NYTimes reporters' notes.
> The AI company asked Judge Sidney H. Stein of the US District Court for the Southern District of New York to step in and compel the Times to produce reporters’ notes, interview memos, and other materials for each of the roughly 10 million contested articles the publication alleges were illegally plugged into the company’s AI models. OpenAI said it needs the material to suss out the copyrightability of the articles. The Times quickly fired back, calling the request absurd.
Can any lawyer on here defend OpenAI's request? Or is the article not characterizing it well in the quote?
Model weights could be treated the same way phone books, encyclopedias, and other collections of data are treated. The copyright is over the collection itself, even if the individual items are not copyrightable.
> Encyclopedias are copyrightable. Phone books are not.
It depends on the jurisdiction. The US Supreme Court ruled that phone books are not copyrightable in the 1991 case Feist Publications, Inc. v. Rural Telephone Service Co. However, that is not the law in the UK, which generally follows the 1900 House of Lords decision Walter v Lane that found that mere "sweat of the brow" is enough to establish copyright – that case upheld a publisher's copyright on a book of speeches by politicians, purely on the grounds of the human effort involved in transcribing them.
Furthermore, under its 1996 Database Directive, the EU introduced the sui generis database right, which is a legally distinct form of intellectual property from copyright, but with many of the same features, protecting mere aggregations of information, including phone directories. The UK has retained this after Brexit. However, EU directives give member states discretion over the precise legal mechanism of their implementation, and the UK used that discretion to make database rights a subset of copyright – so, while in EU law they are a technically distinct type of IP from copyright, under UK law they are an application of copyright. EU law only requires database rights to have a term of 15 years.
Do not be surprised if in the next couple of years the EU comes out with an "AI Model Weights Directive" establishing a "sui generis AI model weights right". And I'm sure the US Congress will be interested in following suit. I expect OpenAI / Meta / Google / Microsoft / etc. will be lobbying for them to do so.
Encyclopedias may be collections of facts, but the writing is generally creative. Phone books are literally just facts. AI models are literally just facts.
Are they, or are they collections of probabilities? If they are probabilities, and those probabilities change from model to model, that seems like they might be copyrightable.
If Google, OpenAI, Facebook, and Anthropic each train a model from scratch on an identical training corpus, they would wind up with four different models that had four differing sets of weights, because they digest and process the same input corpus differently.
That indicates to me that they are not a collection of facts.
The AI training algorithms are deterministic given the same dataset, same model architecture, and same set of hyperparameters. The main reasons the models would not be identical are differing random seeds and precision issues. The differences would not be due to any creative decisions.
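A toy sketch of that claim (hypothetical, using a one-parameter model rather than a real LLM): with the data, architecture, and update rule held fixed, the only thing that changes the resulting weights between runs is the random seed used for initialization.

```python
import random

def train_tiny_model(seed, steps=20, lr=0.1):
    """Gradient descent on y = w * x over a fixed dataset.

    Toy stand-in for LLM training: the dataset, architecture, and
    hyperparameters are fixed; only the seed-dependent random
    initialization of w varies between runs.
    """
    xs = [1.0, 2.0, 3.0, 4.0]          # fixed "training corpus"
    ys = [2.0 * x for x in xs]         # fixed targets (true w is 2.0)
    w = random.Random(seed).gauss(0.0, 1.0)  # seed-dependent init
    for _ in range(steps):
        # derivative of mean squared error with respect to w
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Same seed: bit-identical weights. Different seed: close, but not identical.
assert train_tiny_model(seed=0) == train_tiny_model(seed=0)
assert train_tiny_model(seed=0) != train_tiny_model(seed=1)
```

Both runs converge toward the same "fact" (w ≈ 2.0), but the exact bits of the final weight differ per seed, which is the non-creative source of variation the comment describes.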
What if I train an AI model on exactly one copyrighted work and all it does is spit that work back out?
E.g. if I upload Marvels_Avengers.mkv.onnx and it reliably reproduces the original (after all, it's just a fact that the first byte of the original file is 0xF0, etc.)
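The thought experiment can be made concrete (hypothetical sketch, not an actual ONNX model): a "model" trained to map byte offsets to byte values is just a lookup table over the original file, so the weights are a verbatim encoding of the work.

```python
# Hypothetical: an overfit "model" that memorizes one file byte-for-byte.
original = bytes([0xF0, 0x9F, 0x8E, 0xAC])  # stand-in for a copyrighted file

# "Training" on exactly one work: the weights ARE the work.
weights = {offset: byte for offset, byte in enumerate(original)}

def model(offset):
    """'Inference' is just a lookup, so reproduction is perfect."""
    return weights[offset]

reconstructed = bytes(model(i) for i in range(len(original)))
assert reconstructed == original  # the model reliably spits the work back out
```

Each individual mapping is "just a fact" about the file, yet the collection of them reproduces the work exactly, which is the substantial-similarity problem raised below.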
A work that is “substantially similar” to a copyrighted work infringes that work under US law, no matter how it was produced. (Note: Some exceptions apply, and you have to read a lot of cases to get an idea of what courts find “substantially similar”.)
Who gives a damn about copyright when this is clearly profiting off of someone else's work without compensation? Sometimes the law is inadequate and that's ok—the law just needs to change.
The title of Tencent's paper [0] as well as their homepage for the model [1] each use the term "Open-Source" in the title, so I think they are making the claim.
Most likely yes. I don't think companies can be blamed for not wanting to subject themselves to EU regulations or uncertainty.
Edit: Also, if you don't want to follow or deal with EU law, you don't do business in the EU. People here regularly say if you do business in a country, you have to follow its laws. The opposite also applies.
1. No one is training on users' bank details, but if you're training on the whole Internet, it's hard to be sure if you've filtered out all PII, or even who is in there.
2. This isn't happening because no one has time for more time-wasting lawsuits.
> No one is training on users' bank details, but if you're training on the whole Internet
Tencent has access to more than just bank accounts.
In the West there's Meta that this year opted everyone in their platform into training their AI.
> This isn't happening because no one has time for more time-wasting lawsuits.
No, this isn't happening because a) their training data, without fail, includes material they shouldn't have willy-nilly access to, and b) they want to pretend to be open source without being open source.
Doesn't that mean if they used data created by, (or even the data of), anyone in the EU, that they would want to not release that model in the EU?
This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
Which, I mean, I can kind of see why US and Chinese companies prefer to just not release their models in the EU. How could a company ever make a guarantee satisfying those requirements? It would take a massive filtering effort.
This seems to mirror the situation where US financial regulations (FATCA) are seen as such a hassle to deal with for foreign financial institutions that they'd prefer to just not accept US citizens as customers.
> > This sounds like "if an EU citizen created, or has data referenced, in any piece of the data you trained from then..."
> Yes, and that should be the default for any citizen of any country in the world.
This is a completely untenable policy. Each and every piece of data in the world can be traced to one or more citizens of some country. Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
I think that’s kinda the point that is being made.
Technology-wise, it is clearly feasible to aggregate the data to train an LLM and to release a product based on that.
It seems that some would argue that was never legally a feasible thing to do, based on the training data being impossible to use legally. So, it is the existence of many of these LLMs that is (legally) untenable.
Whether valid or not, the point may be moot because, like Uber, if the laws actually do forbid this use, they will change as necessary to accommodate the new technology. Too many “average voters” like using things such as ChatGPT, and it’s not a hill politicians will be willing to die on.
> Actively getting permission for every item is not feasible for any company, no matter the scale of the company.
There's a huge amount of data that:
- isn't personal data
- isn't copyrighted
- isn't otherwise protected
You could argue about whether that is enough data, but neither you nor the corporations argue that. You just go for "every single scrap of data on the planet must be made accessible to supranational trillion-dollar corporations, without limits, now and forever".
In Meta's case, the problem is that they had been given the go-ahead by the EU to train on certain data, and then after starting training, the EU changed its mind and told them to stop.
Hmm, in fairness I don't see where Tencent is claiming this is open source (at least in this repo; I haven't checked elsewhere). The title of the HN post does make the claim, and that may be controversial or simply incorrect.
The term "open source" had no significant use to refer to software before the Open Source Initiative started promoting it. Previously, it was only intelligence industry jargon, meaning "publicly available information", which includes software that fails your "can read the source code" test. "Source" was used in the journalistic sense, not as in "source code". The correct term for software that passes your test but does not meet the Open Source Definition is "source available".
The OSI made a huge mistake in choosing to use a non-trademarkable borrowed term as their own trade industry term. The original (and quite long-standing) use to refer to publicly available texts is still widely used, and English isn't a prescriptive language outside of legal frameworks like trademark. This is why you really should pick a trademarkable name when you try to coin industry terms.
If that meaning is "intuitive", why was it not used before the Open Source Initiative introduced their definition? The competing uses are the ones co-opting an existing phrase.
Ironically their policies are why I want to move there with my American dollars. I want to live somewhere that cares about my rights, not the rights of corporations.
That's fine, but don't complain when you lose access to products and services that are widely available elsewhere.
In particular, restrictions on ML models will leave you without access to extremely powerful resources that are available to people in other countries, and to people in your own country who don't mind operating outside the law. Copyright maximalism is not, in fact, a good thing, and neither is overbearing nanny-statism. Both will ultimately disempower you.
You have to realize that as an individual, you have no power anyway.
It doesn't matter if an individual personally has access to ML models, because governments and/or huge corporations will ensure that individuals cannot use them for anything that would threaten government or corporate interests.
This unfettered explosion of ML growth is disempowering all of us. Those with power are not using these tools to augment us, they are hoping to replace us.
> This unfettered explosion of ML growth is disempowering all of us.
Never mind that I've gotten things done with ChatGPT that would otherwise have taken much longer, or not gotten done at all. If this is what "disempowerment" feels like, bring it on.
Although the tech is nowhere near ready to make it happen, I would be very happy to be "replaced" by AI. I have better things to do than a robot's job. You probably do, too.