Great. I posted two forks, if GitHub takes action then I've got standing to sue RIAA (under the declaratory judgement act), which I'd love to do pro se.
Btw, if anyone is associated with any fork that did go down and is interested in bringing a pro se case, feel free to contact me. I'm not a lawyer and can't give legal advice but I can help point you at some helpful laws and cases. I've been fighting false infringement claims in court for over a year on other issues. Also take a look at the complaint in https://torrentfreak.com/riaa-sued-by-youtube-ripping-site-o..., although it has some issues I noted at https://news.ycombinator.com/item?id=24902619.
If I actually end up filing suit I'm sure I'll post about it here and I can get press coverage. I'd prefer to do it pro se and keep costs down; I've always wanted to do a pro se case but have never had the opportunity (all my cases have been big enough that it would be dumb not to hire a lawyer).
It still remains to be seen whether others that have already been taken down will start filing cases. There's at least one case that I linked above, although it's not directly related to youtube-dl.
Why would you have standing to sue the RIAA if GitHub blocked your account? I think you may only have a claim against GH, but I haven't read the T&C closely enough; I can imagine they have a clause enabling them to block anyone for any reason. I don't think you could do anything about it.
RIAA made a claim that some content violates their rights under DMCA, which in this hypothetical causes my account to be blocked. I can bring a case under the declaratory judgement act against RIAA, asking a court to decide whether the content is actually infringing. See https://en.wikipedia.org/wiki/Declaratory_judgment
I may also have a tortious interference claim, although damages are likely negligible so it's probably not worth bringing any claims other than that declaratory judgement one.
You really should talk to a lawyer because it sounds like the entire basis for your protest is grounded in a misunderstanding of how the DMCA provisions work.
I can tell you based on what you've posted in this thread that you absolutely do not have a tortious interference claim unless you've entered into contracts you haven't previously mentioned.
I've talked to plenty of lawyers and spent plenty of money on intellectual property issues. I know what I'm talking about. A declaratory judgement suit is in my view the proper avenue to address a tenuous IP claim.
Like I said, tortious interference is not a great fit, but it's still applicable: interference with my contract with Github counts. If it were an actual DMCA notice then the only remedy would be a 512(f) suit (plus declaratory judgement), because the DMCA preempts other causes of action (caveat: [0]), but it's not a DMCA notice.
[0] Note that some jurisdictions haven't explicitly held that DMCA preempts state claims such as tortious interference, and at least one IP lawyer I've spoken to has brought state claims in such a jurisdiction.
For starters, the first link is to a motion to dismiss. A motion to dismiss merely assesses whether the plaintiff has made sufficient claims in its pleading to keep the case going; it's not a substantive ruling on the merits of anything. And second, it was related to a "tortious interference with business expectancy" claim, which required the plaintiff to have a business that was interfered with as a result of the defendant's false trademark violation claim, and that harm actually resulted from the defendant's false claim. (The case is still going, and both sides have filed competing motions for summary judgment.)
The second case was a tortious interference case in which the defendant also made false allegations of counterfeiting, which led to Amazon removing the plaintiff's products from the store. So I'm assuming you brought it up because you are going to argue that the RIAA will make false statements of fact against you if you make a copy of the youtube-dl repository.
If that's your plan, good luck, because you're going to need it. You're definitely going to want to hire better lawyers, too, if your current lawyers have advised you that you could beat the RIAA in these circumstances.
I'm well aware of the differences between a motion to dismiss, summary judgment, and awards at trial. A ruling on a motion to dismiss shows that the law applies to a situation as alleged in the complaint.
As I said in the comment you originally responded to, damages are likely to be negligible, which is why I wouldn't bring such a claim.
You didn't explain what objection you had to the concept of tortious interference as it relates to a takedown request, so I just checked my database for examples of successful tortious interference claims at any stage. (I'm following over 100 similar cases where people or companies harmed by false infringement claims sued.) To the extent you were objecting because I lack a relevant contract, I don't see why the GitHub contract isn't sufficient. The biggest issue is damages.
And as I said above, the plan is to file a declaratory judgement claim, which definitely applies if I can get standing.
Whether a tortious interference claim would work depends on a number of factors including the exact details of a hypothetical RIAA takedown request, and as I said I don't plan on pursuing one anyway.
Well, I would strongly suggest you speak to a lawyer on it, but I would note this strategy only really gives you an American judgement, and the RIAA takedown would still apply in any other territory.
I don't know if Github would host content where they felt there were issues in some countries and not others. Would they geofilter it as they do with Iranian contributors due to US sanctions?
It's not; you can literally swap it out for very similar anti-circumvention laws in virtually every Western democracy, and indeed almost every functioning country in the world.
The comment is wrong though. There is no need to settle the issue globally; it's a US-only issue. Being a WIPO member or signing a treaty doesn't create laws. Being a WIPO member doesn't do much of anything, actually; it's mostly just talk, which is why there are so many members. Countries have sovereignty and pass and implement their own copyright laws, typically requiring elected politicians to propose and vote on them, independently of WIPO and treaties. They also have their own judicial systems, and most countries don't even use a common-law style system where things can ever be "settled"; it's more about court orders than clever demands written by lawyers.
So, when it comes to copyright, most of the world is really not like the US. A random WIPO country likely not only lacks anti-circumvention copyright laws, but also lacks a lot of other things in its copyright law.
RIAA did not allege infringement. They alleged a violation of section 1201, e.g., publishing or otherwise trafficking in copyright protection circumvention technology. No copyright infringement is required to violate DMCA 1201(a)(2):
17 U.S.C. 1201 Circumvention of copyright protection systems
(a) Violations Regarding Circumvention of Technological Measures.
(2) No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that
(A) is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under this title;
(B) has only limited commercially significant purpose or use other than to circumvent a technological measure that effectively controls access to a work protected under this title; or
(C) is marketed by that person or another acting in concert with that person with that person's knowledge for use in circumventing a technological measure that effectively controls access to a work protected under this title.
It's not illegal to create copyright protection circumvention technology and in some circumstances it may be legal to use it. However, 1201 says it's illegal to offer it to the public or otherwise traffic in it.
The EFF on behalf of Prof. Green is not arguing "non-circumvention". They admit Green has performed circumvention. They argue 1201 as applied to Green's work is unconstitutional.
Again, the RIAA is not alleging circumvention. They are alleging publication/trafficking in circumvention technology.
It's not a direct match, but that's to be expected in an underexplored area of law.
To get really precise, I'd probably argue along these lines:
1. Declaration that YouTube's rolling cipher isn't an effective access control or copy control
2. Declaration that my actions don't count as marketing in violation of 1201(b)(1)(C), or alternatively that any marketing was not in violation of that section.
3. Declaration that my fork of youtube-dl isn't primarily designed or produced for the purpose described in 1201(b)(1)(A)
4. Declaration that youtube-dl has substantial commercially significant purpose or use beyond circumvention on copyrighted content, as per 1201(b)(1)(B)
I don't think access controls is a winner, although I'd probably throw in the analogous declaratory judgement counts for 1201(a)(2) just in case.
Depending on how RIAA defends it, I could add declaratory judgement counts addressing their specific claims.
They might have good cause, but that doesn't mean they'll win in court.
As I said in many of the threads on this topic, I think the RIAA's letter is in good faith, and it's not plausible to get damages from them, but they could still lose on the underlying issue in court.
>"People should move their youbube-dl repositories to servers hosted in Switzerland, "
>"Time for a decentralized version control system?"
>"The nice thing about fossil"
... those well-meaning suggestions are missing the true difficulty: The community wants a (1) Schelling Point[0] for workflow/issues/discussions/PRs that's also (2) censorship resistant. So far, (1) and (2) contradict each other's goals.
Nobody has come up with a technology solution that satisfies both goals. Yes, Fossil has discussions built into the repo, but Fossil is not a Schelling Point. Yes, one can run GitLab on a self-hosted Raspberry Pi from a home internet connection, but that's also not a stable Schelling Point because an ISP like Comcast can shut that IP down for a DMCA violation.[1] And SMTP mailing lists also ultimately depend on a server that holds the discussion archive and a well-known address for new users to send an "add my email" request. Thus, the existence of a well-known server becomes a specific target for RIAA/DMCA takedown.
As for other "uncensorable" technology ideas such as IPFS, Freenet, blockchain, etc., I haven't seen any proof-of-concept from other projects that demonstrates easy-to-use collaboration like GitHub's. Remember, it's not about the raw git repo... it's about the Focal Point for the collaboration workflow.
I’ve heard this over and over, but I never understood it. How is Git supposed to be decentralized when you push and pull from one place? Would you just set up multiple “origins” with each one pointing to a head maintainer’s computer (and named appropriately)? Because that sounds like a massive pain compared to a centralized system like GitHub.
Of course, there is the benefit of a VCS built on the idea of decentralization: not having to be online to make commits (cough SVN cough). But, in theory, could a centralized system work in “offline mode” like Git?
With git, "push/pull from one place" is merely a usage/behavior convention by collaborators. Nothing about git requires push or pull, and nothing about git requires that you only use a single "canonical" repository.
The key point about git is that every copy of the repository is semantically equivalent to every other copy. The one on your local machine is a fully fledged repo just like the one on your colleague's machine and just like the one on github or wherever else a "canonical repo" is hosted.
This is utterly different from older systems like svn or perforce, where the "central server" is semantically distinct from whatever you have on your machine.
At any time, you could decide that a different instance of the repo is canonical (shared push/pull), and that's what makes git decentralized.
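To make that concrete, here's a throwaway sketch (all paths under /tmp are made up) showing that any clone can take over as the shared push/pull point:

```shell
# Throwaway demo: any clone can become the "canonical" repo.
rm -rf /tmp/demo-canonical.git /tmp/demo-alice /tmp/demo-bob
git init -q --bare /tmp/demo-canonical.git
git clone -q /tmp/demo-canonical.git /tmp/demo-alice
cd /tmp/demo-alice
git -c user.name=Alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "first commit"
git push -q origin HEAD                  # publish to the current canonical repo
git clone -q /tmp/demo-canonical.git /tmp/demo-bob
# Later, everyone agrees Bob's copy is canonical instead:
git remote set-url origin /tmp/demo-bob
git fetch -q origin                      # works: every clone is a full repo
```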
> I’ve heard this over and over, but I never understood it. How is Git supposed to be decentralized when you push and pull from one place?
Git is a _distributed_ version control system, not a decentralized version control system. This simply means that users of the repository download the entire repository onto the machine they are making changes from. Distributed version control systems have no opinions on whether the source of truth for a given repository is decentralized, distributed, or centralized.
People push and pull from one place by convention but it was designed to pull from anywhere. The problem is PRs, issues, code review etc has become centralized. As far as the code goes they just need to host all code at their own domain and use GitHub and other services as a mirror.
"Fully offline capable" would be a better term than "decentralized" when describing git.
But, one relatively decentralized workflow that git has good support for is sending patches via email (git format-patch, git am). I believe the Linux kernel still does a large amount of its collaboration that way.
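A minimal sketch of that round trip, with made-up local paths and identities standing in for real mail delivery:

```shell
# Throwaway demo of the email patch round trip.
rm -rf /tmp/patch-demo-upstream /tmp/patch-demo-contrib
git init -q /tmp/patch-demo-upstream
git -C /tmp/patch-demo-upstream -c user.name=Maint -c user.email=maint@example.com \
    commit -q --allow-empty -m "initial"
git clone -q /tmp/patch-demo-upstream /tmp/patch-demo-contrib

# Contributor: commit a change and format it as an emailable patch.
cd /tmp/patch-demo-contrib
echo 'the fix' > fix.txt
git add fix.txt
git -c user.name=Contrib -c user.email=contrib@example.com \
    commit -q -m "add fix.txt"
git format-patch -1 HEAD              # writes a 0001-*.patch file
# (git send-email would mail it; here we hand the file over directly.)

# Maintainer: apply the patch, falling back to a 3-way merge if needed.
cd /tmp/patch-demo-upstream
git -c user.name=Maint -c user.email=maint@example.com \
    am -3 /tmp/patch-demo-contrib/0001-*.patch
```

Note that `git am` preserves the contributor as the author of the resulting commit, which is exactly how the email addresses end up in the history.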
I once demonstrated git format-patch and git am to someone and it was a mind-blowing moment for them. At least it was a good mind-blowing moment, unlike that link above.
You don't push in a git email workflow, but you also don't try to build a full mesh. You interact with maintainers and their trees, but not necessarily always the same ones.
You clone a maintainer's personal tree (which can be anywhere), make the change, and email them the patch with git send-email. They then apply it with git am -3. Trusted maintainers might tell another maintainer to pull their tree from wherever they put it, but they can place it wherever in the moment, including just on their own machine; it only needs to work for a single fetch, and could be a new location every time.
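A rough sketch of the maintainer-to-maintainer side, using made-up local paths in place of real hosts (`git request-pull` just prints a summary the other maintainer can act on):

```shell
# Made-up local paths stand in for "wherever the tree happens to live".
rm -rf /tmp/pull-demo-lead /tmp/pull-demo-sub
git init -q /tmp/pull-demo-lead
git -C /tmp/pull-demo-lead -c user.name=Lead -c user.email=lead@example.com \
    commit -q --allow-empty -m "initial"
git clone -q /tmp/pull-demo-lead /tmp/pull-demo-sub

# Sub-maintainer: do some work in their own tree.
cd /tmp/pull-demo-sub
echo 'work' > work.txt
git add work.txt
git -c user.name=Sub -c user.email=sub@example.com commit -q -m "do work"
# Summarize what the other maintainer would be pulling:
git request-pull origin/HEAD . HEAD

# Lead maintainer: fetch/merge straight from the sub-maintainer's tree.
cd /tmp/pull-demo-lead
git pull /tmp/pull-demo-sub HEAD
```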
FWIW, this is often how I interact with open-source projects. Fork my own copy on Github, clone that locally, and spin up a branch for changes/PR. But then also `git remote add upstream <them>` so that I can keep my own code up to date more easily (`git merge upstream/master` from master, and then trickle that into my PRs).
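That workflow can be sketched with local stand-ins for the GitHub remotes (all paths and names here are made up):

```shell
# All paths are made up: "upstream" is the project, "fork" is your
# server-side copy, "clone" is your local checkout of the fork.
rm -rf /tmp/sync-upstream /tmp/sync-fork /tmp/sync-clone
git init -q -b master /tmp/sync-upstream
git -C /tmp/sync-upstream -c user.name=Them -c user.email=them@example.com \
    commit -q --allow-empty -m "initial"
git clone -q --bare /tmp/sync-upstream /tmp/sync-fork
git clone -q /tmp/sync-fork /tmp/sync-clone
cd /tmp/sync-clone
git remote add upstream /tmp/sync-upstream

# Upstream moves ahead...
git -C /tmp/sync-upstream -c user.name=Them -c user.email=them@example.com \
    commit -q --allow-empty -m "new upstream commit"

# ...so refresh master from upstream and push it to your fork.
git fetch -q upstream
git merge -q upstream/master     # note the slash: upstream/master
git push -q origin master
```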
Git is not truly decentralized, in the sense that you still need to provide a few specific remote hosts, all of which could disappear due to law enforcement. Instead of one centralized server you've moved to maybe 3-4 centralized servers, which isn't that helpful against the issue at hand, especially if those servers are all within the US, Europe and other law-enforcement-cooperating countries.
In a true decentralized system (e.g. Bittorrent, Bitcoin, etc.) it's practically impossible to delete something, and there would be plenty of accessible copies of data in jurisdictions that are out of reach of any single country's law enforcement, and even out of reach of the original creator to delete, so the original creator's government would not be able to force them to delete it.
Another HN user pointed out Freenet to me in another post, which sounded a lot like the right answer, but I haven't researched it in detail yet and whether it could be easily adapted to work with the git command to actually version repos in a decentralized fashion.
> Git is not truly decentralized, in the sense that you still need to provide a few specific remote hosts, all of which could disappear due to law enforcement.
Some time ago, I and a friend worked together on a project in a place where there was no centralized git server; both of us ran git-daemon (https://git-scm.com/docs/git-daemon) on our desktops, and we often pulled from the other as we developed each feature. It was fun, and worked really well.
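A rough reconstruction of that setup on a single machine (port and paths are arbitrary; with two real machines you'd clone from the peer's hostname instead of localhost):

```shell
# One peer exports their repo over the git protocol; --export-all serves
# everything under the base path without needing marker files.
rm -rf /tmp/p2p
git init -q /tmp/p2p/project
git -C /tmp/p2p/project -c user.name=Peer -c user.email=peer@example.com \
    commit -q --allow-empty -m "feature work"
git daemon --base-path=/tmp/p2p --export-all --reuseaddr \
    --port=9419 --pid-file=/tmp/p2p/daemon.pid --detach
sleep 1
# The other peer pulls straight from their colleague's machine:
git clone -q git://localhost:9419/project /tmp/p2p/colleague
kill "$(cat /tmp/p2p/daemon.pid)"
```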
Git is truly decentralized, in that each developer has a full copy of the code and its history, and any of these copies can be used as the source for other copies.
> In a true decentralized system (e.g. Bittorrent, Bitcoin, etc.) it's practically impossible to delete something, and there would be plenty of accessible copies of data in jurisdictions that are out of reach of any single country's law enforcement, and even out of reach of the original creator to delete, so the original creator's government would not be able to force them to delete it.
That's how it works with git; everyone who cloned the repository to their desktop (for instance, to develop and contribute a patch) has a full copy of the history up to that moment, and that copy is out of the reach of both other jurisdictions and the original creator. There's no way to even know how many of them there are, much less who has them.
> each developer has a full copy of the code and its history
But this doesn't solve the problem of how the general public is supposed to locate and access those copies. If the code isn't accessible by its intended users, it effectively doesn't exist.
This is a horrible misunderstanding of decentralisation in the context of git.
Git was designed to be distributed in the sense that people work on local copies of the code which they then reintroduce into the project.
Projects themselves are not supposed to be decentralised, there is supposed to be a single point of truth which represents the collaborative effort of everyone working on it. And that's what needs to be hosted somewhere, and everyone needs to agree on it. What even would be the point of any kind of version control if you have hundreds of competing top level codebases?
> And SMTP mailing lists also ultimately depend on a server that holds the discussion archive and a well-known address for new users to send a "add my email" request.
This is not entirely correct. The fault lies in the assumed importance of the list.
When I send a kernel patch, I send it to the maintainer, then CC anyone I myself think should be involved, and finally CC the relevant lists (plural). The list acts as feed for lurkers, and as an archive.
The mailing list server being down will not affect this workflow in the slightest. It's a minor inconvenience at best, and mostly to those that were not part of the development flow and just wanted to lurk.
>The fault lies in the assumed importance of the list.
The "importance" of the mailing list I'm trying to emphasize is the Schelling Point. This is the social discovery aspect you're not addressing.
Let's dissect your following example:
>When I send a kernel patch, I send it to the maintainer, then CC anyone I myself think should be involved, and finally CC the relevant lists (plural). The list acts as feed for lurkers, and as an archive.
The list also acts as a Schelling Point to coordinate and consolidate public discussions.
And consider the seemingly trivial action of "I send it to the maintainer". We have to unpack some hidden steps in that. How does a new contributor -- or a new non-contributing non-programmer enduser who just wants to file a bug request -- get a list of maintainers' email addresses ex nihilo[1]? A mailing list also has a Schelling Point aspect where people can discover email addresses of maintainers.
Right now, the yt-dl.org About[2] webpage is broken and therefore doesn't show the maintainers. I don't know the email address of the current youtube-dl maintainer and Google search results[3] don't quickly reveal it.
- "github.com/ytdl-org/youtube-dl" acted as a Schelling Point
- "yt-dl.org" also acted as Schelling Point
How would new people easily discover the maintainers' email addresses without any Schelling Points?
>It's a minor inconvenience at best, and mostly to those that were not part of the development flow
I disagree. Today, if I notice a bug where youtube-dl is broken, I don't have a canonical place to report the issue. Somebody said the official maintainer moved the repo to Gitlab -- but when I search Google for "youtube-dl gitlab" and click the top link[4], the repo owner says "I'll wait to see what happens with the GitHub repository and the current maintainer before I do anything with this clone."
Thank you for posting these responses, because this conceptual material around "Schelling Points" and such is incredibly useful - and tends to be a blind spot within the tech community.
(Curiously I've also found that "theory of mind" issues are a real struggle with programming circles - i.e. keeping track of who is aware of what information, or especially, not realizing that just because you, yourself have learned a fact or built a thing, that that info isn't automatically 'pushed' to everyone else.)
> The "importance" of the mailing list I'm trying to emphasize is the Schelling Point.
You have said Schelling Point enough times for me to have caught that. It is, however, not correct to assign this to the mailing list.
(Also, as a side-note, "focal point" is a more natural, common, and much less awkward term.)
> The list also acts as a Schelling Point to coordinate and consolidate public discussions.
No, it does not. However, it does serve a similar (albeit unimportant) function: providing visibility into ongoing discussions for outsiders, as well as a way for people to join a discussion they were not otherwise introduced to by fellow maintainers.
As a point of reference, during the handful of kernel patches I have gotten upstreamed, the mailing lists themselves served no function. My only interaction with them was to Cc them, as per the guidelines. I did not read them, and all my interaction was with the individuals I To: or Cc:'d directly.
A key thing to note here is that historical discussions on a mailing list are not particularly important in mailing-list style development, as all relevant information should be in the commit message that the discussion culminated in. Likewise, being able to chime in on ongoing discussions is, on its own, not truly that important either—in the worst case, something bad is implemented, and someone who discovers this starts a new discussion and possibly ends up changing it.
In big, or long-standing projects, this happens relatively often, as no individual has the bandwidth (or interest) to keep up with every discussion, and relies mostly on being introduced to discussions when others find them relevant (with git history revealing if they are relevant).
> A mailing list also has a Schelling Point aspect where people can discover email addresses of maintainers.
> How would new people easily discover the maintainers' email addresses without any Schelling Points?
> I disagree. Today, if I notice a bug where youtube-dl is broken, I don't have a canonical place to report the issue.
These are all the same argument, which I will consolidate as: "You cannot discover the target address if the mailing list is down".
This is entirely false: the mailing list does not serve the function of email address discoverability in any form. Instead, the source is the source code itself, in which the target email addresses will be recorded ad nauseam.
Not only will you as holder of the source have the email address of every historic contributor and maintainer (including knowledge of who is relevant to what area), there will usually also be explicit listings available (README, maintainer lists, etc.). External sources might redundantly cover the information as well, but that is non-critical.
To carve it out: You, as a holder of the source code, will always be able to send patches, ask questions, as well as start discussions that others will organically be added to, using no other sources of information than the source, irrespective of mailing list availability, as long as you can send email, and that at least one of the maintainers or contributors of the source can still receive email.
>the mailing list does not serve the function of email address discoverability in any form. Instead, the source is the source code itself, in which the target email addresses will be recorded ad nauseam.
Not every project records email addresses in the source code. Yes, the project maintainers could do that, but some don't.
I'm looking inside the latest "youtube-dl-2020.11.01.1.tar.gz" source archive and it does not have maintainers' email addresses in it. The main Python source code file "YoutubeDL.py" and other .py files do not have email addresses. The archive also has a text file "AUTHORS" for historical credits -- but no actual email addresses. I suppose the current maintainer could explicitly put his email address in that source, but he/she didn't. (Maybe there are good reasons to avoid putting an email address in the source code.)
>To carve it out: You, as a holder of the source code, will always be able to send patches, ask questions, as well as start discussions that others will organically be added to, using no other sources of information than the source, irrespective of mailing list availability, as long as you can send email, and that at least one of the maintainers or contributors of the source can still receive email.
Based on what you wrote, it seems you still have not fully internalized the difficulties of a Schelling Point for something like Youtube-dl. Even if "youtube-dl-2020.11.01.1.tar.gz" contained the actual email address of the current maintainer, if that maintainer decides to quit because of RIAA hassles and someone else (unknown to us because we can't predict identities of future authors) takes over and creates a new "youtube-dl-2021.03.15.1.tar.gz", the old "tar.gz" file does not point to the existence of the new file or new canonical location. The world still needs a Schelling Point so people know where to download the latest trusted non-malware version of Youtube-dl even if they don't know the identity of the new maintainer. You claim the "source" can be the starting point ex nihilo but you still need a trusted Schelling Point to obtain the source in the first place to enable the subsequent steps of private email correspondence that make a mailing list server irrelevant.
How do people get the latest "youtube-dl-202x.xx.xx.x.tar.gz" without a Schelling Point?!? Your answer: start with the source code.
My response: isn't that circular logic?
I can only guess that you're using the mental model of Linux kernel development. The problem with trying to transfer that collaboration workflow over to youtube-dl is that the Linux project does not have adversaries like RIAA trying to kill it. Schelling Points are stable in that world of mainstream projects and seemingly trivial. E.g. just read the Linux kernel .h/.c files for email addresses.
> Not every project records email addresses in the source code.
You have missed the point entirely.
The email Git workflow is based around—you guessed it—git. Git, being designed entirely around email-based workflows, stores the author and committer email address of every commit (they often differ), and furthermore often also marks the reviewers. This is fully automatic.
If you are looking for email addresses, you either A) do not have the prerequisites for doing the work in the first place, or B) do not understand git.
Any Git tree that has not been intentionally butchered will contain the email address of every single maintainer and contributor to ever have contributed to the project, and that is always the only thing you need when doing email workflows.
(Linux stores more than that, for convenient official subsystem maintainer lookup—but focusing on the extra processes needed by a project that in 2019 alone had over 74k commits from 4k authors and over 5 million lines of code changed is a little silly when we're talking about comparatively tiny projects where the git info is fine.)
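To illustrate the automatic recording the comment above describes, a throwaway demo (repo path, names, and addresses all made up):

```shell
# Throwaway repo with two made-up contributors.
rm -rf /tmp/who-demo
git init -q /tmp/who-demo
cd /tmp/who-demo
git -c user.name=Maintainer -c user.email=maint@example.com \
    commit -q --allow-empty -m "initial"
git -c user.name=Contributor -c user.email=contrib@example.com \
    commit -q --allow-empty -m "a patch"
# Who has touched this tree, and how do I reach them?
git shortlog --summary --numbered --email HEAD
git log -1 --format='latest author: %an <%ae>'
```

The shortlog lists every author with their address, straight from the commit metadata; no website or mailing list is involved.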
> How do people get the latest "youtube-dl-202x.xx.xx.x.tar.gz" without a Schelling Point?!?
You are derailing significantly. If you have not yet figured out how to obtain a git tree, then you have clearly gotten ahead of yourself with most of your arguing. How can you be discussing how to create issues if you don't even know how to get the software in the first place to discover issues with? How do you even know of the software if you had not already found a place which could contain the starting point for your process?
Nothing related to workflows and VCS has any importance before you manage to get hold of the software in the first place, whether that is on a website, forge, IPFS, a USB stick smuggled over a border, or whatever you'd like.
The starting point, however, is nothing more than that. It serves no purpose once "recruitment" is complete, as you work on your machine-local repo from there. You may occasionally sync with other individuals trees, in what can be considered peer-to-peer transactions, but even that is mostly optional, and it can be done with any individual. You do not need full consensus in the tree.
(The tarballs you refer to are not for development. They're only releases for end-users, which cannot be used to make patches from.)
> I can only guess that you're using mental model of Linux kernel development.
This development model is used for many other projects, including a handful of mine.
If you think that there are no adversaries out to kill Linux, you'd be sorely mistaken, but it's moot as it has no single points of failure. Everyone works on their own trees, and send around patches, occasionally syncing in various directions.
---
The fact that you have now assigned several disconnected components as focal points underlines the fact that there are, in fact, none. I have also come to the conclusion that you are not very familiar with either git itself or the email workflows it was developed to sustain.
Granted, there will always be central individuals, in the form of the maintainers of the subtree you are contributing to (note: there can be any number of maintainers and subtrees for a given project). This is not due to infrastructure, but due to code changes losing meaning if arbitrary patches are applied in arbitrary order. But, there is no single point of failure and no single point of gathering.
But, even then, they are not focal points. The various maintainers act as focal areas of particular subtrees, but they are organic and redundant, and the loss of one is easily routed around, as any holder of a tree can accept patches and act as a maintainer. And loss of one implies loss of an individual, not loss of a random hosting service.
>How can you be discussing how to create issues if you don't even know how to get the software in the first place to discover issues with? How do you even know of the software if you had not already found a place which could contain the starting point for your process?
Why is this question so strange to you? The end user usage of a tool can be much more widely known than the workflow development of it.
There are only 3 obvious web pages: (1) Home (2) Download (3) About
From there, you can download some binaries to use the software but there is no mention of any source host to enter a "git clone" to then later do a "git shortlog --summary --numbered --email".
It is way easier to get the end user software to run than clone the git tree. For some reason, your reasoning assumes they are always tied together but youtube-dl is an example where they are not. (It sorta used to be tied together when the youtube-dl webpage's "dev repository" link to Github actually worked. It doesn't work now. See the "ex nihilo" problem?)
And btw, a git clone doesn't help for sensitive (aka legally uncertain) projects where contributors don't enter real valid email addresses, or use fake emails to avoid spam. (This is a general statement and not a test of the validity of youtube-dl maintainers' email addresses specifically.)
Also, a "git clone" is a barrier for non-programmer end users who just want to report a bug. The Github repo (or mailing list) was a logical place (Schelling Point) for that type of activity rather than cloning a repo that the end user doesn't need.
>The starting point, however, is nothing more than that.
But some starting points matter more than others.
>If you think that there are no adversaries out to kill Linux, you'd be sorely mistaken,
This is not engaging with the spirit of what I wrote. Surely, you realize we were talking about adversaries such as RIAA/MPAA/DMCA using legal threats for takedowns and not business competitors such as Microsoft that tried to squash Linux.
>, but it's moot as it has no single points of failure. Everyone works on their own trees,
This is a 100% true statement about no SPOF that is not relevant for this youtube-dl discussion. We already know hundreds (thousands) of people have already git cloned youtube-dl and some have even re-uploaded copies back to Github. But all that copying misses the point of all the various threads in the last week where people wonder where the "blessed repo" with PRs/issues/etc will live that's resistant to censoring. Repeating the "no SPOF nature of distributed git" is not making progress in that discussion.
So you're going to take a project that does not at all use emails in their workflow as an example for email based workflows? Brilliant.
Have you perhaps considered that maybe, if a project actively used emails in any part of their process, maybe addresses would have been more visible?
If only we had the means to write arbitrary relevant text on a page such as the one you linked...
> But some starting points matter more than others.
They objectively do not. Everyone starts once; it only matters for recruitment. In any case, since you yourself seem to accept "youtube-dl.org" as an acceptable starting point, and it can contain arbitrary information in any form, there is nothing more to discuss here.
> ... is not making progress in that discussion.
... Like every argument you've made so far.
At this point, I'll consider the discussion over by three-fold repetition. Everything you have presented has already been answered (directly or indirectly), and the counter-scenarios are all faulty (e.g. discussing doomsday scenarios with no ability to share maintainer addresses while accepting a website, using a non-email-based project as a reference for an email workflow, yadda yadda).
How about a copy of GitLab hosted as a Tor hidden service? (The core problem there being that someone has to be willing to host it, of course, as this isn't truly a decentralized approach to the problem, and so maybe one day gets outed and something seriously bad happens to them... but, practically?)
The hosting for the server is irrelevant if it is relatively stable (does not get compromised every few days) and you have alternatives to move to if it gets taken down.
The true single point of failure is the domain name, but sites like sci-hub seem to be able to manage that rather easily too. The domain name sometimes has to change but people still seem to be able to find it easily.
> Yes, one can run Gitlab on a self-hosted Raspberry Pi from a home internet connection but that's also not a stable Schelling Point because ISP like Comcast can shut that IP down for DMCA violation
Agreed, however this is where something like a Tor hidden service can help out, assuming something as innocuous as YTDL doesn't draw the ire of three letter agencies, which is very possible given that the FBI investigates copyright infringement.
I'd be curious if Mastodon could be extended to support this. It's not entirely censorship resistant but it at least diffuses the ownership and control.
Git will happily push/pull to/from repos on your peers' computers, and you could just, you know, email a list of people instead of depending on a central mailing list for discussion. It'll suck to onboard new people but it's entirely possible to have a Git project + discussion without a single central point of failure.
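As a local sketch of that peer-to-peer point: two "peers" below are just two clones pulling from each other directly, with no central host involved (the `/tmp/p2p-demo` path and the names alice/bob are made up; in real life origin would be an ssh:// URL to a peer's machine):

```shell
set -e
rm -rf /tmp/p2p-demo && mkdir -p /tmp/p2p-demo && cd /tmp/p2p-demo
git init -q alice && cd alice
git -c user.email=a@example.com -c user.name=alice commit -q --allow-empty -m "init"
cd ..
git clone -q /tmp/p2p-demo/alice bob   # bob's "origin" is just alice's directory
cd bob
git fetch -q origin                    # fetching from a peer works like any hosted remote
```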
Github isn't Facebook, there's hardly any network effect involved (the actual network is the World Wide Web itself).
The main issue is all the information that is GitHub-exclusive (like the "issues"), which AFAIK is extremely valuable for youtube-dl, and also AFAIK cannot be (easily) exported. But then developers should have known better than to put all their eggs in someone else's (Microsoft's !!) centralized and locked basket.
If The Pirate fricking Bay is still up after all these years, so can the youtube-dl repository (or repositories) be.
EDIT : probably an even better example : Popcorn Time
A Schelling point is a solution that people converge on without communication. But since youtube-dl already has a lead maintainer, a website, and plenty of other ways of coordinating on a solution, I don't see how the concept of "Schelling point" is relevant.
>But since youtube-dl already has a [...] I don't see how the concept of "Schelling point" is relevant.
There are multiple Schelling Points. The specific Schelling Point that's uncertain is a widely known place that everybody has converged on for bug reports, discussing issues, pull requests, etc. The Github repo used to serve that purpose but DMCA took it down.
Yes, the domain "youtube-dl.org" is _one_ Schelling Point, but there was another Schelling Point for developers' collaboration and Github removed it. Someone said the maintainer moved the repo to Gitlab but the one found by Google search doesn't seem canonical/blessed/official. Even if it is canonical, Gitlab could also remove it because of a DMCA takedown.
There doesn't seem to be a slam dunk obvious (and easy) solution for creating stable Schelling Points for developers to collaborate on legally dubious projects like Youtube-dl that's the target of RIAA.
You could always use a newsgroup for patches/issues/discussion. Those are notoriously hard to censor and usually can be posted to and read with normal mail clients.
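The patch side of that workflow is already built into git: a patch is just a text file, so it can travel over NNTP, mail, or anything else. A miniature sketch (the `/tmp/patch-demo` path, `extractor.py`, and the dummy identity are made up):

```shell
set -e
rm -rf /tmp/patch-demo && mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
git init -q repo && cd repo
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "base"
echo "fix" > extractor.py
git add extractor.py
git -c user.email=dev@example.com -c user.name=dev commit -q -m "Fix extractor"
# turn the last commit into a mailable text file
git format-patch -1 -o /tmp/patch-demo/outbox
```

A maintainer on the other end applies the posted file with `git am 0001-Fix-extractor.patch`, authorship and all.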
I think some people are missing a major point about youtube-dl's takedown. The purpose of this DMCA claim wasn't just to prevent people from downloading videos, it was to set a legal precedent for the RIAA and similar organizations to troll Github and anyone hosting public repositories with it, by linking them to "illegal downloads".
By agreeing to this without any public opposition, GitHub has become similar to YouTube itself, in that it will be expected to immediately comply with à la carte takedown requests by the RIAA and other copyright trolls from now on. It doesn't matter whether the actual reason to take down your large repository is political, or that it simply bothers a certain company for miscellaneous reasons, since all it takes is some "illegal content".
GitHub is already expected to comply with DMCA takedowns just like they have to comply with counter notifications. That's how they maintain a safe harbor, though the latter is a bit harder to truly enforce (see the recent Twitch kerfuffle).
Sorry if it was a bit tongue-in-cheek. The RIAA can now call Github out whenever they have an argument for them enabling downloads of copyrighted content through their repositories. And Github is expected to fix it for them.
I'm not sure how this is any different than how it's been for years now. GitHub has been complying with DMCA takedown requests since 2011[1]. What precedent needed to be set here? The RIAA could always do this and GitHub has been complying with it (because they legally have to).
A lot of the things listed on the DMCA repo (if not most of them) are not software that could be used to download content, but the literal content you could obtain using them, such as textbooks, proprietary fonts, and so on.
By taking down youtube-dl, Github has opened a window for copyright trolls to speculate about what a repository is used for, and take it down if one of its use cases involves downloading copyrighted content.
I can see how it's legitimate to take down a repository with the sole, explicit goal of downloading copyrighted content, but youtube-dl is widely used by journalists to preserve uncopyrighted content.
> I can see how it's legitimate to take down a repository with the sole, explicit goal of downloading copyrighted content, but youtube-dl is widely used by journalists to preserve uncopyrighted content.
Youtube-dl is used by some journalists. Generally, most journalists at a real newspaper or media organization would ask for the original video, not a copy from youtube.
Even your local evening news will use the original video unless they are specifically discussing the online version of the video, in which case they will either record footage of the screen using one of the hundreds of cameras they have lying around or just share their screen view directly.
The only time journalists use youtube-dl for reporting is when it is not possible to get a copy of the original.
> Generally, most journalists at a real newspaper or media organization would ask for the original video, not a copy from youtube.
In earlier HN threads about this youtube-dl takedown, it was pointed out that sometimes authors of videos used youtube-dl to download their own videos, because they don't keep their own original; treating YouTube as a repository for their data, much like people use Google Drive, or GitHub.
So even when a real newspaper or media organisation asks for the original, those authors need youtube-dl to retrieve that original. Or maybe they'll just tell the media organisation to download it themselves, as there's no difference.
> youtube-dl is widely used by journalists to preserve uncopyrighted content.
How? Virtually all content is copyrighted; the journalists would need to be restricting themselves to some very narrow topics like "government documents".
The supposed claim wasn't that it could be used for piracy (despite many here parroting that claim), but that they advertise that it could be. I haven't read the actual claim, but it seems they're basing their argument on a block of test code that uses a video containing RIAA-copyrighted music. It's sketchy at best, but there's a reason BitTorrent clients make no mention of piracy sites at all: the MPAA and RIAA are extremely litigious and will use any opportunity they get.
Perhaps bittorrent clients don't mention specific sites but there's no end of related projects that do, e.g. expand the three "Supported ___ Trackers" lists on https://github.com/Jackett/Jackett
I'm not aware of any notices filed against that specific project.
No they haven’t. This is NOT a traditional DMCA take-down. It relies on a new legal argument using a different section of the law which has a much more perverse impact.
Then again pretty much any policy can be made unenforceable if a sufficient number of people decide to make it so.
Technologists with their ability to automate are at a distinct advantage here again.
I'm just waiting for youtube-dl to move development to Tor or i2p or another censorship resistant network - and ship with a built-in client to update from there.
Probably the lawyers will move on when the thing becomes hard enough to use and gets contained to a few techies showing off to their few friends and calling it a victory.
The issue is that anything that scales needs an infrastructure and infrastructures can be extralegal only if they are not worth the time of the guys with the guns.
> Probably the lawyers will move on when the thing becomes hard enough to use and gets contained to a few techies showing off to their few friends and calling it a victory.
youtube-dl is a tool mostly used by techies anyways, and they specifically took down the code repository. On top of that youtube-dl is a commandline tool. This is not your typical "google how do I netflix" end-user.
You can bootstrap an application that is hosted on tor or i2p with a short bash script: 1. download and start tor/i2p, 2. download the application, 3. run it.
youtube-dl already has a built-in updater (youtube-dl -U). You'd just change that to use tor/i2p as well.
Anyone who can't get such a script to run wouldn't have been able to use youtube-dl anyways.
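The three bootstrap steps above can be sketched as a dry-run script. Everything here is hypothetical: the onion URL is made up, and with `DRY_RUN` set each command is only echoed, not executed (unset it to actually run the steps on a Debian-style system with `torsocks` available):

```shell
# run() echoes each command; with DRY_RUN set it stops there instead of executing.
run() { echo "+ $*"; if [ -z "$DRY_RUN" ]; then "$@"; fi; }
DRY_RUN=1
run apt-get install -y tor                                        # 1. get a Tor client
run torsocks curl -O http://exampleonionhost.onion/youtube-dl     # 2. fetch over Tor (dummy URL)
run chmod +x youtube-dl                                           # 3. make it runnable
```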
> youtube-dl is a tool mostly used by techies anyways
This is just speculation, but I think that there may have been a tipping point reached where it is no longer being just used by CLI "techies". It has been incorporated into other tools which appeal to a much wider userbase - many of whom will never know they are using it behind the scenes.
For example, there is: a user-friendly front-end for it in Kodi [1], an easy to use web interface [2], a polished desktop app [3], etc.
I suspect it's not so much the original CLI which they care about, but the lowering of the barrier to entry for "everyone else".
Presumably the point of the action is to disrupt the ongoing development of youtube-dl, not delete the code from the internet. I don't know much about the codebase, but I assume that YouTube could put time into breaking compatibility if it was known that the project had lost momentum and couldn't keep up. It takes a lot of effort to keep up with a well funded organisation - see NewPipe, etc.
Of course you don't need GitHub to run an open source software project, you could conceivably do it with IPoAC[1] if you really wanted to. But in the real world, being kicked off the most popular platform of its kind, and losing the community and integrations that come with it, is going to slow the project down.
There’s nothing stopping you from writing your own downloader too. It’s not about making video downloads impossible, they are probably gunning to disrupt the development of the project so it would not be able to keep up with the changes of the video websites, make it painful to use, thus eventually insignificant.
Decentralization doesn't work without a centralized tool. Torrents are nothing more than a clever idea without The Pirate Bay and similar sites; Bitcoin is worth nothing without the exchanges.
You can create your own TPB by just crawling the DHT network for torrents and downloading the torrent metadata from active peers. In fact I did this once as a fun weekend project.
Also you pretty much gave a counterexample to your own argument by bringing up TPB.
The "people with guns" would love nothing more than for it to go away, yet here we are - 17 years and counting.
As I said, when it becomes insignificant enough, no one cares. As with TPB: it's blocked in many places and the service is down quite often. Few enough people bother to use it that the guys with the guns don't bother either.
It’s the same situation with your homebrew unlawful stuff. As long as your impact is not worth the attention of the guys with the guns you can do whatever you want.
In fact "the people with guns", as you like to call them, do little else day-in day-out than try to scrub google results and other places, because they do care, yet they are losing badly.
As long as one can google "thing" + "torrent" and come up with something in the first few results, it's pretty obvious who isn't winning.
Then there's also the actual statistics for torrent usage...
Sure they can take down a pirate bay or some other tracker now and again, but at the end of the day for each site they take down there will be 10 new ones.
And that's what I'm talking about when I'm saying "pretty much any policy can be made unenforceable if a sufficient number of people decide to make it so."
The best efforts of those people with guns are proverbial drops in the ocean.
What always happens when laws and reality aren't in alignment, is that people ignore those laws en masse.
> Torrents are nothing more than a clever idea without The Pirate Bay
No, there are magnets now. You can send a torrent as a hash.
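To make that concrete: a magnet link is nothing more than a URI wrapping the torrent's infohash (per BitTorrent's BEP 9), so "sending a torrent" can be a single line of text. The hash and name below are dummies, not a real torrent:

```shell
INFOHASH="0123456789abcdef0123456789abcdef01234567"  # dummy 40-char hex infohash
NAME="example-file"                                  # optional display name
MAGNET="magnet:?xt=urn:btih:${INFOHASH}&dn=${NAME}"
echo "$MAGNET"
```

Any client handed that URI can fetch the torrent metadata itself from the DHT swarm, with no tracker site in the loop.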
The reason it won't work is because you can just take that hash, log all of the swarm ips, send secret letters to all of the isps to reveal the seeders and leechers, bust in their doors and arrest them, publicize their arrests in every newspaper and on every television channel, interrogate them to expose more of the network, then if the law lags create parallel constructions to convict the most important people as the ringleaders of a copyright-violating cartel, and let the rest languish in jail for a little while before dropping the charges six months later and releasing them after their names have been in the paper.
In Germany it is quite common to get a fine for that (an adhortatory letter or Abmahnung in German).
Media companies monitor torrents, and as soon as they see a new German ip address seeding, they send a letter to the ISP with the IP address and timestamp. The ISPs send the law firm the real user address and then they send you a letter demanding a fine for illegally uploading copyrighted material often between 1000-3000€. If you don't pay up, they take you to court.
This is quite common, and many people receive letters even if they only seeded for a few seconds. Apparently the easiest way out is to just ignore the letters and pretend you don't live there, but I wouldn't be so sure (a friend told me that's what he did and that it allegedly worked, but I have no experience with either).
So this is a lot less drastic but just as effective. And it even makes VPNs risky, because disconnecting and exposing your real IP for even one second is enough to receive such a letter (though not all torrents are monitored, and private torrents are usually safe because the German equivalent of the RIAA doesn't have access to them).
I recommend you take a look at the Wikipedia article on "Abmahnung" to see how that is even possible when the copyright holders don't sue themselves: https://en.wikipedia.org/wiki/Abmahnung
My experience is that instead most people switched to Netflix, Spotify, YouTube (et al.) because they were more convenient and cheap enough, and the MAFIAA mostly stopped caring about torrents.
And people that were worried about getting fined started using VPNs/seedboxes.
I still haven't heard of anyone in my circles getting a fine.
It used to happen in the U.S. quite frequently in the days of Kazaa/Limewire. If you got caught pirating a movie, your ISP would cut you off after 1-3 strikes.
It doesn't happen as much anymore because it's now easier to get a high-quality version of the movie through legitimate means than through illegitimate ones.
That's very different from the implication of a SWAT team busting in your door and carting you off to prison for some unspecified amount of time on account of copyright infringement.
Yes, the four horsemen of the infopocalypse. To put it bluntly, if your system can't protect criminals, it can't protect regular users either.
Heaven forbid we expect the police to actually do their job without having to spy on all of us all the time. Is expecting competence from them so much to ask?
No, JS is fundamentally a technology with an anti-competitive ethos, enforced by the fact that you need an army of engineers employed full time to keep even a trivial site up and running. On top of that, it has surveillance built into its core: you can't be anonymous using it, by design.
Apps built on top of TCP that don't have persistent state, like signed emails, don't suffer from either of those problems and are fundamentally decentralized, anonymous, and more user-friendly once you try to use them five years after they were created.
See, anything about sharing needs centralisation. It's not about JS or whatever language or framework makes this thing tick.
The need for centralisation is an inherent weakness that can't be solved by technology. Sure, you can share something among friends; then your centralisation point would be your school or workplace, something too high-touch to attack at scale, so no one cares.
There are plenty of alternatives that don't require centralization, they are just unpolished because there is no money in them.
Again, github only exists because for some reason my generation of developers has decided that js is the only way to use the internet and a pretty interface makes up for not knowing the tools you are using.
Blockchain is the only one in your list that is actually decentralized, and it winds up de facto centralized anyway because it's too slow.
BitTorrent is centralized around the search engines, because a content-addressable storage system can't help with content discovery. You need someone to delete torrents that claim to be video games but are actually ransomware.
FreeNet is the same as BitTorrent, except that it has built-in support for updatable "freesites," so the content directories are built into the protocol instead of being run as regular websites. Also, it's anonymous, and kinda slow. It's actually really cool, and horribly underrated, but it doesn't actually solve the Schelling point problem. Which freesite do you trust to have factually correct information?
People should move their youtube-dl repositories to servers hosted in Switzerland, where people are not criminalized for downloading content and making copies for personal use (even when copy-protected).
I hate to break it to you but distribution of tools related to bypassing DRM/technical protection measures (which is what the RIAA takedown letter actually cited) is illegal in Switzerland and has been since 2006.
youtube-dl does not circumvent any protection measures, so hosting it should not be illegal in any country, including Switzerland.
The only problem here is the absurdity of how the DMCA and similar laws work in the US, which allow companies to easily spam invalid claims and require content to be removed unless the victim puts in the work to prove the claim invalid. And, of course, legal cases tend to be settled not by morals and innocence, but by the lawyers' paychecks.
> youtube-dl does not circumvent any protection measures, so hosting it should not be illegal in any country, including Switzerland.
That's a different argument to "Swiss law is different to the US and permits this" though. Bluntly the international treaty wording that is integrated into all national laws is vague enough that I just don't know if what youtube-dl does would be found to trigger it. I certainly wouldn't gamble my own freedom on it. I also wouldn't instruct on it if I were the RIAA tbh.
Honestly I suspect if you put it to courts in various territories a hundred times you'd probably find it came out about fifty fifty. It's a badly drafted law, but unfortunately many laws are badly drafted and only clarify themselves through precedent and we don't have enough of that here.
> The only problem here is the absurdity of how the DMCA and similar works in the US, which allows companies to easily spam invalid claims and require content removed unless the victim puts in the work to prove the claim invalid. And, of course, that legal cases tend to be settled not moral and innocence, but by the lawyers paychecks.
Not especially. Ultimately if you have a free hosting service you shouldn't be especially surprised they're not willing to gamble the literal freedom of their staff for you to go to bat on legal cases. You get what you pay for. The DCMA doesn't have a takedown regime in these circumvention cases, it's just that Github becomes jointly liable when it becomes aware, and have obviously looked at it themselves and decided they are not willing to take the risk.
In that regard, the law is pretty similar in the US to pretty much every member country in the world that belongs to WIPO.
So if you open devtools in your web browser and go to the Network tab, you can manually download all the audio and video files -- congratulations, your browser just circumvented protection measures because it interpreted some JavaScript. I guess we have to take down browsers too now. This is completely absurd.
It's like closing the door and hanging the key next to the door for all to see. Definitely not an effective technical measure to protect the house.
EDIT: And let's not forget that YouTube hosts many CC-BY-licensed videos which explicitly allow unlimited use (incl. decryption if necessary), and that YouTube has contracts with copyright collectives in most countries, so any user may safely assume that a video was legally uploaded (and thus downloading and storing it for personal use is legal in most countries).
No, because in the U.S. we have this thing called "reasonableness."
Obfuscating the cache file effectively controls the work by preventing most viewers from making a permanent copy. It takes technical knowledge to overcome that measure. (Go ahead, ask any non-techie you know to download a copy of a video from Youtube without using a tool like youtube-dl. They'll fail.) That's a reasonable measure; it would prevent the overwhelming majority of the population from copying. It doesn't matter that it wouldn't protect against the entire population; in the real world, keys, safes, bike locks, etc. don't stop determined individuals either.
And the outcome, if youtube-dl wins this case? Youtube and other sites that feature substantial amount of copyrighted content will have to change the technical details of how they store temporary video content, ultimately rendering youtube-dl useless for all videos, not just the ones guarded by the RIAA. Win the battle, lose the war.
> It takes technical knowledge to overcome that measure. (Go ahead, ask any non-techie you know to download a copy of a video from Youtube without using a tool like youtube-dl. They'll fail.)
Go to the Taylor Swift music video, open devtools, go to the Network tab and you can download everything from there. The only caveat is that you have to manually merge audio and video files, and youtube-dl does this for you. Other than that, youtube-dl does basically the same thing.
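The manual merge step is a one-liner. A sketch with placeholder file names for the two streams saved from the Network tab (the command is only printed here; run it where ffmpeg is installed, with `-c copy` doing a lossless stream copy instead of re-encoding):

```shell
VIDEO=video.mp4    # placeholder: video stream saved from the Network tab
AUDIO=audio.m4a    # placeholder: matching audio stream
CMD="ffmpeg -i $VIDEO -i $AUDIO -c copy merged.mp4"
echo "$CMD"        # printed rather than executed in this sketch
```

This is roughly the muxing step that youtube-dl automates after it fetches the two streams.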
You claimed that the DMCA does not require the technical measure to be highly effective, or even effective at all, and I quoted the law, which speaks of "a technological measure that effectively controls access to a work protected under this title". Now you come with yet other arguments; I could now quote yet another law or decision that contradicts your statement, and so on.
> (and thus download and store for personal use is legal in most countries).
That's a highly questionable statement. Certainly, most European countries do not have such a law in all circumstances, and it's dependent on if the personal use copy is made of an asset you are licensed to have permanent access to (which you aren't of a collectively licensed work) or a broadcast for timeshifted purpose (which an on-demand asset isn't).
> if the personal use copy is made of an asset you are licensed to have permanent access to
Apparently you're in a Common Law country. In European countries making a copy for personal use is generally legal, unless the user should have been aware that the content was uploaded illegally; in the case of Youtube, the user may generally assume that the content has been uploaded legally, because Youtube has concluded contracts with most copyright collectives and the compensation is made in the form of a flat-rate surcharge on all storage media.
In Switzerland you may even legally download illegal content for personal use and also circumvent a copy protection for the purpose of creating a personal copy. EDIT: and did I mention that I studied law in Switzerland?
YouTube makes these videos freely available. Downloading them is completely legal. (So is videotaping them with an external camera, or anything in-between.)
youtube-dl has circumvention code needed to download copyrighted music from major publishers, which have a slightly different cipher than the rest of the site. They even had integration tests for this code path.
I don't think youtube-dl violates any of the points in the referenced paragraph.
EDIT: also note that it states "effective technological measures" ("Umgehung wirksamer technischer Massnahmen" in German); if there is no (effective) protection measure such as in Youtube then the referenced sanctions obviously don't apply (nulla poena sine lege).
Indeed, what is an effective technical protection measure is highly debatable.
The only problem is the only likely chance of definitively finding out in your territory is to have a criminal action against you and see if you win in court.
> the only likely chance of definitively finding out in your territory is to have a criminal action against you and see if you win in court
The plaintiffs would have to prove that the videos that can be downloaded with youtube-dl are protected by an objectively effective technical measure, that the tool circumvents this protection, and that the accused was aware of the effectiveness of the measure and of the illegality of his actions.
I studied law at the University of Zurich (Switzerland) and would be calm in face of such a criminal prosecution. The plaintiff's risk of litigation would be considerable and the defendant would have little to fear.
And the action is also on thin ice in the USA. The assumptions on which the takedown is based are very questionable. I am curious to see how the proceedings that have now been initiated will turn out.
No, the RIAA notice is on pretty solid ground in the U.S.
You'll note that organizations like the EFF haven't weighed in on this like they normally do when content owners get overzealous with enforcing IP controls, nor have many of the usual IP law commentators.
And more importantly, the youtube-dl creators would have already filed a response if the RIAA notice was as weak as so many people in this thread have claimed; they've had a week to do so. The fact that they've yet to respond indicates that either they or the lawyers they've spent the last week talking to are having difficulty finding a response to the RIAA's notice. (This doesn't mean that they won't file a counter-notice, just that the issue is not a piece of cake like so many people on HN believe.)
Filing or not filing a counter-notice isn't just a question of who is legally right, but also of who can afford a litigation. I doubt the youtube-dl authors have the budget to fight this.
"Youtube-dl is a legitimate tool with a world of a lawful uses. Demanding its removal from Github is a disappointing and counterproductive move by the RIAA."
That isn't a counter claim, it is a lawsuit by a company that isn't affiliated with youtube-dl and the claims of the suit predate the youtube-dl incident.
In 2019, the RIAA got a court order to have Google remove the Yout domains from search results. Yout is arguing that their service provides time shifting, which is fair use, and their domains should be restored on Google search. Plus, they want damages.
FYI -- Yout has been blocked from several ISPs in Europe because courts have found their service to be illegal stream ripping and infringes copyright.
Are you hard of hearing? The link you posted has the filing clearly visible in the video. I read it. All claims are based on actions taken in 2019. Not one mention of the youtube-dl DMCA takedown.
Interesting thought. It would definitely call bluff on these bully tactics by GitHub.
GitHub's deplorable behavior might very well be illegal, but fat chance of anyone dragging the company to court over it, considering how much that would cost and the amount of lawyers that Microsoft would put on such a case.
Yet again a tech company making up their own laws and (so far) getting away with it. One might wonder, if anything ever substantially changed in the Wild West.
Or they just don’t want to lose their safe harbor protections afforded by the DMCA. And to avoid that, they have to do things like this. They’re also not breaking any laws by removing user content that their TOS say they have the right to remove.
This is an area where GitLab could have a huge advantage. If they had really good support for mirroring projects between instances then running your own private instance for control while mirroring to GitLab.com for scale / convenience would make a lot of sense.
Of course I’m sure that would go into the super mega platinum “we’re only charging 1/5 of the real value” BS tier or whatever they’re pushing these days.
Yes, but having a complete copy of all my projects in a matching self-hosted environment mitigates a huge amount of risk because it would still be there if I got banned from GitLab.com. I could still use the workflow and tooling I'm used to. Contrast that with GitHub where losing your account means losing the workflow and tooling you depend on.
I'm not talking about trying to re-host a controversial project though. I'm more interested in the risk to the average small developer where getting banned would ruin your life, but wouldn't make the news.
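For what it's worth, the mirroring half of that setup needs nothing GitLab-specific; plain git can already do it. Here's a minimal sketch using throwaway local paths in place of real hosting URLs (all paths and names below are illustrative, not any particular provider's):

```shell
set -e
demo=$(mktemp -d)

# Two bare repos standing in for "your own box" and a second host.
git init --bare "$demo/primary.git"
git init --bare "$demo/mirror.git"

# Day-to-day work targets the primary...
git clone "$demo/primary.git" "$demo/work"
cd "$demo/work"
git -c user.email=dev@example.com -c user.name=dev \
    commit --allow-empty -m "initial commit"
git push origin HEAD

# ...and a single extra remote keeps the mirror in sync with every ref.
git remote add mirror "$demo/mirror.git"
git push --mirror mirror
```

In practice the `mirror` remote would point at a hosted URL, and re-running `git push --mirror mirror` (e.g. from a post-receive hook) keeps every branch and tag in sync, so losing one host loses nothing.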
The only way GitLab would escape DMCA would be to host in China or Russia (which also would get it banned by many of their customers). The US has many IP agreements with European countries and even some Asian countries, good luck escaping from it.
No; it could, e.g., be hosted in Switzerland. When assessing such a case, Swiss law applies; agreements with other countries are not directly applicable, but are reflected in the national law.
EDIT: and in contrast to e.g. Germany or apparently the US there is nothing like a cease and desist letter in Switzerland, but claims such as the present one must be brought before the court.
A lot of EU countries have explicit copyright exceptions for personal copies though, due to the private-copying levy on storage media, so youtube-dl is legal in those countries anyway.
It wouldn't be legal in any of those territories - those explicit copyright exemptions don't apply to content that you're not licensed to have permanently or broadcasts.
European law also requires member states to have laws against breaching technical protection measures, and bypassing those isn't usually legal irrespective of personal copy exemptions. And in this case Youtube-dl was taken down because it was claimed to breach technical protection measures.
> It wouldn't be legal in any of those territories - those explicit copyright exemptions don't apply to content that you're not licensed to have permanently or broadcasts.
> European law also requires member states to have laws against breaching technical protection measures, and bypassing those isn't usually legal irrespective of personal copy exemptions. And in this case Youtube-dl was taken down because it was claimed to breach technical protection measures.
That's actually the opposite: unlike in the US, there is an interoperability exception for DRM, which protecting a video certainly falls under, since otherwise you would not be able to read it.
That's a different problem; that is about judging the legality of DRM itself. Copyright holders unfortunately still have the right to add DRM to their media, but consumers also have the right to break it for their personal copies.
I personally think DRM should be plainly illegal, since you have a right to copy for your personal use; that would make more sense, but here we are, this is some middle ground.
On the contrary: bypassing technical protection is only illegal in the EU if done for illegal purposes, not if done while staying within consumer rights.
I don't know if it is ironic, but the VCS is decentralized. The main issue is project and code management (bug tracking, issues, planning, merge requests, review UI, ...). But here we also have a lot of solutions: Gitlab, Redmine, Savana, ... However, GitHub gives your project exposure to a lot of potential contributors, and tends to be used as an extension of your CV (many recruiters have contacted me because of my activity on GH). Let's be honest, the social network side of GH is one of its most important aspects.
> the social network side of GH is one of its most important aspects.
Exactly, and it's always the social network side of things that causes trouble when it's centralized. It's exactly our social interactions that we need to be free from central control in order to have any real freedom.
Those are still centralized, i.e. there is one host that belongs to some person or org that's the main upstream repo and which can be hit with a takedown request. Git itself can be used in a truly decentralized way by having the main repo on IPFS or the like, though.
Edit: Also your sibling posts have pointers to projects with real decentralization, such as ForgeFed.
What do you mean by "real" decentralization? Git is as decentralized as you can get. You don't need IPFS to make it decentralized; it's already there. It was designed that way.
A lot of people are confused about this because GitHub and GitLab are the only intermediaries they've used to collaborate with git. If that applies to you and you would like to know better, read this: https://drewdevault.com/2019/05/24/What-is-a-fork.html
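That article's point can be demonstrated end to end with git's built-in, forge-free collaboration flow. A minimal sketch, using local directories as stand-ins for the repositories and mailboxes that would normally be involved:

```shell
set -e
demo=$(mktemp -d)

# "Upstream" maintainer's repo with one commit.
git init "$demo/upstream"
git -C "$demo/upstream" -c user.email=alice@example.com -c user.name=alice \
    commit --allow-empty -m "initial commit"

# A contributor clones it, makes a change, and exports a patch file
# (this is what would normally be attached to or inlined in an email).
git clone "$demo/upstream" "$demo/contrib"
echo "print('hi')" > "$demo/contrib/hello.py"
git -C "$demo/contrib" add hello.py
git -C "$demo/contrib" -c user.email=bob@example.com -c user.name=bob \
    commit -m "add hello script"
git -C "$demo/contrib" format-patch -1 -o "$demo/outbox"

# The maintainer applies the patch directly, authorship preserved.
git -C "$demo/upstream" -c user.email=alice@example.com -c user.name=alice \
    am "$demo/outbox"/0001-*.patch
```

No forge, no pull request, no central account anywhere in that flow; `git send-email` would replace the shared directory in a real email-based workflow.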
I was pretty clear in what I said: Gitea and Gitlab are centralized.
Git itself is neither centralized nor decentralized by design, it merely provides an infrastructure for replication and is agnostic about whether or not there is a "center" to the space of replicas. It depends on how you use it... but in practice the way it is used in nearly all projects is that there is one "main" (authoritative) upstream repo for a project, and that creates a point of centralization unless you put that repo on IPFS.
Yes, you could also completely forgo having a "main" repo, but how are the users then to find the "latest version" of your project? Who are you going to fork from, how do you discover collaborators? Git does not provide mechanisms for solving these problems; they're outside its scope. But some of the P2P and federated technologies such as IPFS do. With IPFS you can have a main upstream repo that's advertised without there being a host / person / org that can be hit with a takedown request. And other federated projects mentioned in this thread address other questions around collaboration that Github (as well as Gitea and Gitlab) address in a centralized fashion.
There is a difference between talking about a decentralized system, or about decentralized hosting of a centralized system. You are talking about the latter, but that is irrelevant if you have the former.
There is no "latest" version of a project in the decentralized model. As soon as there is one "latest" version that rules them all, you have centralized an aspect of the project and that immediately becomes a target for organizations like the RIAA. IPFS does not help here. Because in the end, someone must have the exclusive right to release a new "latest" version. This release mechanism becomes a single point of failure.
Sure, you can use IPFS to create multiple copies of that new version to avoid having them taken down. But you could also just do `git clone` on a new server, as so many already have. Creating copies and putting them up for download is not the problem.
If git is used in a truly decentralized way, everybody has their own version of the project. No two have to be alike. If you want to use youtube-dl to download porn, get the version with the right extension. If you have moral objections to porn, get the version without it. Exactly like there is no "one" Linux: each distro has its own version, with the patches they think are important. This way, the RIAA could even release their own version of the tool, one that can download from YouTube, but not videos that are subject to the copyright they want to uphold.
Users would not have to download the tool directly from that one developer/team; they could download it from whoever they trust to have their best interests at heart. That is what decentralization means.
Git has no idea about "the main upstream repo"; that's a GitHub-et-al invention. Git will happily push/pull to a bunch of other people's repos with no "main" or central part at all.
With Fossil you can use the built-in HTTP server to quickly clone a repo, then just run `fossil server` to bring up a new host. Easy to decentralize the interface as well as the code.
Better to refer to them as "code forges": the central locations where you clone your git repo from, and that have additional features such as an issue tracker, team boards, and what-have-you.
> decentralized
The code forges themselves (github, gitlab, gitea, sourcehut, vervis, etc.) are not decentralized, but the open ForgeFed protocol, under development at https://forgefed.peers.community (an extension of ActivityPub), facilitates decentralization of forges.
---
TL;DR Talking about decentralizing GH / VCS always leads to (correct) reactions such as 'git is already decentralized'. Code forge interactions need to become decentralized too.
This is just a federation protocol, so it doesn't really solve the problem. Federated systems are just a way to make it easier to interoperate between a number of separate instances, but every individual "thing" (in this case, a repository) is still going to be hosted on an instance that can be taken down. (Sure, it could be put on another one, but then you didn't solve the problem.) What you want is a "truly decentralized" (the word "truly" was a critical part of the comment you were replying to) and really, hopefully, "distributed" revision control alternative, not merely a mechanism to "enable interoperability" (the phrasing from their website) between existing services. Which should make the issue clear: if no existing service could host the repository in question, then interoperating between existing services doesn't suddenly cause stable hosting to appear.
The nice thing about Fossil is that you can keep the website / documentation as part of the repository, then simply clone the repository and run `fossil server` over and over to replicate the entire network with the built-in HTTP server. A solution that makes it _easy_ for people to decentralize makes it more likely that it will be decentralized when needed.
They haven't been "ordered" anything by anyone who has the authority to issue an order. They have been requested to take it down, but this request arguably has no force under the DMCA because the RIAA is not the copyright holder of the material requested to be taken down. Even if youtube-dl could be considered "circumvention technology", the DMCA safe harbour provision does not apply to that.
It may not be surprising or even "evil", but at best it is excessively cautious for a company that professes to serve the open source community and wants/needs the goodwill thereof. Clearly this is the corporate owner (Microsoft) behind GitHub showing its hand.
It's not up to GitHub to analyse the DMCA notice's validity beyond "does it match the format and include the required information". The youtube-dl owners can do that and request an instant revert if they're happy to take on the responsibility.
No service provider is going to fight a specific case of DMCA for a third-party.
As I have seen argued in several places thus far, including at the link below, this alleged DMCA complaint is contrived at best and certainly falls outside the Safe Harbor doctrine. Meaning the "immediate action" requirement on GitHub's behalf doesn't appear to even exist here.
With most companies, Hanlon's razor would probably apply ("never attribute to malice that which is adequately explained by stupidity"). But in this case it is rather dubious (if not shocking) how GitHub's legal department appears to have completely failed to do their basic job.
As a company, GitHub at least has a legal duty to know the law, or be held accountable for the consequences of their actions when they monumentally fuck things up (while doing serious harm in the process).
Either way .. with Microsoft's track record in mind, GitHub's silence on legitimate questions regarding the validity of the RIAA's actions, and now this warning of inflicting harm on people if they dare not play along with the company's dubious legal decisions .. all in all, I'd say that GitHub has long lost any benefit of the doubt by now.
With every day passing, this appears to be shifting more towards plausible (if not likely) malice. On top of that, this isn't a light matter either. Most likely a serious anti-trust violation and might even constitute a criminal enterprise. Not that I expect US authorities to do anything about it, considering its own dubious and dismal track record, for at least several decades.
Technically, the DMCA notice does not match the format and/or include all the required information. From what I've understood, a DMCA notice MUST include a list of the works being infringed. The RIAA also failed to conform to GitHub's standard format, although using a standard form if available is perhaps something you SHOULD rather than MUST do.
From a substantive perspective, you're thinking of a different area of the DMCA, for notices related to copyrighted works. The RIAA notice at issue here was based on Section 1201, for tools used to circumvent copyright protections, and so it's not necessary to disclose the works being violated.
From a formatting perspective, format is generally irrelevant to notices like this because the law does not state a particular format must be used. Thus, any format which conveys the necessary information is generally acceptable.
Just because it's part of the DMCA doesn't mean it is allowed to be used in a take-down notice. Or at the very least such a notice wouldn't have to be construed as such. At least that's what I understood from this article: https://news.ycombinator.com/item?id=24888234
And while formatting is somewhat irrelevant from a legal standpoint it is still one of the tools Github uses to ensure that their take-down requests contain all the required information. Acting upon a request that doesn't conform to the format is risky.
As someone who once filed a DMCA request against something, I can tell you that they requested additional verification that I was the copyright holder. Whether that was wise of them or not, it was not to a small company at all.
The problem with the DMCA is that the anti-circumvention clauses are vague at best, and unfortunately it's easy for well-funded corporations to make claims against it.
There is nothing in the law that talks about the effectiveness of the anti-circumvention tech, so even if the tech is trivial to bypass, if that bypass is done via "unintended usage" (according to the rights holder, not you) then they can claim you are doing something illegal.
YouTube obfuscates the URL of the video stream for videos which have been identified as containing copyrighted material and are licensed for streaming only through the YouTube Web player or app. The copyright owner has made the material available under a license which permits ONLY that.
Youtube-dl contains code to decrypt the URL of the video stream in order to download and save the copyrighted material, in violation of the license.
Now let's look at some definitions from the DMCA:
(A) to “circumvent protection afforded by a technological measure” means avoiding, bypassing, removing, deactivating, or otherwise impairing a technological measure; and
(B) a technological measure “effectively protects a right of a copyright owner under this title” if the measure, in the ordinary course of its operation, prevents, restricts, or otherwise limits the exercise of a right of a copyright owner under this title.
We have to ask ourselves two questions. First, is the URL obfuscation scheme a "technological measure" that “effectively protects a right of a copyright owner under this title”? Yes, it is. In the normal course of its operation, it prevents the user from accessing the work except to stream it through YouTube's Web player or its app. The bar set by the law for "effective" protection is very low. If your "DRM" is a simple XOR cipher whose key is "hello world", then it "effectively protects a right of the copyright owner under this title", provided that it does its job of restricting access to anyone who doesn't know the key.
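To make that "low bar" concrete, here's a toy Python sketch (purely illustrative, not anyone's actual scheme) of exactly that kind of trivial measure: repeating-key XOR with the key "hello world". It's laughably weak, yet in the ordinary course of its operation it still denies access to anyone who doesn't apply the key:

```python
from itertools import cycle

KEY = b"hello world"  # the toy key from the example above

def xor_cipher(data: bytes, key: bytes = KEY) -> bytes:
    """Repeating-key XOR: applying it twice returns the original bytes."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"some licensed content"
scrambled = xor_cipher(plaintext)

print(scrambled != plaintext)              # True: unreadable without the key
print(xor_cipher(scrambled) == plaintext)  # True: trivially undone with it
```

The statute asks only whether the measure "in the ordinary course of its operation" restricts access, not whether it would survive a determined attacker.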
Secondly, does youtube-dl circumvent protection afforded by the technological measure? Yes, it does. The technological measure enforces certain policies by means of its decryption algorithm; youtube-dl reimplements the algorithm, but does not enforce the policies, allowing people to get around restrictions demanded by rightsholders like the RIAA and implemented by Google. Does youtube-dl have limited use other than to circumvent protections afforded by the technical measure? Yes, it does. If the URLs were not obfuscated, a simple HTTP client might suffice to download the video data. The primary use of youtube-dl is to get around the measures Google had put in place to restrict access to YouTube content (the same for other video sites).
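The "reimplements the algorithm but not the policies" point can be sketched with a made-up stand-in transform. To be clear, the real scheme lives in YouTube's player JavaScript and changes over time; nothing below reflects the actual algorithm:

```python
def scramble(sig: str) -> str:
    """Hypothetical server-side transform: swap first/last chars, then reverse."""
    s = list(sig)
    s[0], s[-1] = s[-1], s[0]
    return "".join(reversed(s))

def descramble(sig: str) -> str:
    """What a downloader reimplements: the same steps, inverted, in reverse order."""
    s = list(reversed(sig))
    s[0], s[-1] = s[-1], s[0]
    return "".join(s)

sig = "abcdef123"
print(descramble(scramble(sig)) == sig)  # True
```

Once the inverse transform is known, nothing in it checks or enforces the player's streaming-only policy, which is exactly the gap described above.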
Therefore... youtube-dl is a circumvention device prohibited under the DMCA, and is therefore ILLEGAL. Distributing it is a felony under federal law, punishable by up to five years in prison.
Go ahead. Ask any copyright lawyer. I'm sure they'll tell you the same.
Out of curiosity, do you have any insight about how this works when applied to other mechanisms and tools?
YouTube-dl has a lot of use cases, among which are the downloading of potentially copyrighted works, in a way which seems similar in my mind to the way a VCR has a lot of use cases including, possibly, the recording of TV shows in violation of a license.
Why are VCRs permissible to distribute but not this software?
Because VCRs were deemed to be a tool used primarily for fair use in the Betamax case. The analysis that made VCRs possible (and DVRs, for a while) wouldn't hold up today.
In a nutshell: When Betamax and VHS VCRs came out, tv shows generally aired once, did not repeat, and were not available for purchase on an individual basis. Recording the show to watch later (aka, "time shifting") was the only means some people had to watch an episode.
And importantly: because the episode was not otherwise available (i.e., it wasn't sold or rebroadcast) there was no financial harm to the content owner for copies being made.
DVRs like Tivo relied on this logic as well. And it worked, until the studios caught on and began making film and TV content available for purchase on an individual basis, and offering the content for transitory consumption (aka streaming). By doing so, they eliminated the "time-shifting" rationale of the Betamax case (by offering downloadable copies, albeit at a higher fee, they've also addressed the "connectivity shifting" rationale techies keep bringing up). And indeed, that is why Tivo and other companies stopped selling standalone DVRs in the U.S. and Europe a few years ago. (All of the DVRs you can find today in the U.S. are offered by cable companies or the subscription service providers (Hulu, Youtube TV, etc.) pursuant to streaming and time-shifted viewing licenses they have with the studios.)
Thus, even if the Betamax case was still binding precedent (it hasn't been since the DMCA), it wouldn't apply today.
> Why are VCRs permissible to distribute but not this software?
1) Because the Supreme Court -- in a narrow 5-4 decision -- ruled them so in the "Betamax case" after the MPAA tried -- and up to that point succeeded -- to have them banned as tools of copyright infringement. The MPAA was sore about this for decades afterward, maintaining that the SCOTUS was simply wrong as a matter of law. My girlfriend's stepfather was a high-powered attorney for the MPAA, and he banned the use of VCRs in the house because to use them to record off the TV was, in his view, unquestionably copyright infringement, SCOTUS ruling or no SCOTUS ruling. And he was one of the foremost experts in the country on this area of the law.
2) The Court, in its Betamax decision, said that "[T]he sale of copying equipment, like the sale of other articles of commerce, does not constitute contributory infringement if the product is widely used for legitimate, unobjectionable purposes. Indeed, it need merely be capable of substantial noninfringing uses." However, the DMCA explicitly closes that loophole. Any tool or device whose primary purpose is to circumvent copyright protection is illegal to use or distribute -- save for certain carve-out exceptions including security research, interoperability research, and law enforcement use -- irrespective of whether it has "substantial noninfringing uses" as would, for example, a tool to strip out Denuvo protection from your legitimately purchased copy of Doom Eternal so you can play the game without Denuvo pegging your CPU. The fact that DRM is an irritant that owners of legitimate copies may wish to be removed is immaterial; providing the means to remove it is still a crime.
Are you aware that the decryption you are talking about is simple JavaScript execution and that according to this interpretation, even browsers are in violation?
You can't host something publicly, available via common, general purpose technology and then claim protection when someone uses this general purpose technology to obtain it.
The RIAA would have more of a case if they hosted this on their own site and made it accessible only through their own special purpose tooling. It would still be stupid, but then they could at least claim a RIAA Player(tm) is required to access the content.
> You can't host something publicly, available via common, general purpose technology and then claim protection when someone uses this general purpose technology to obtain it.
I'm not saying this is wrong, but how do you know?
That is, do you claim to have a grounded understanding of the law? (Perhaps as a lawyer, or a layperson who's studied this in some depth.) Or are you simply saying that in your opinion the law should consider youtube-dl's decryption acceptable by this reasoning? Or something else?
(I'm not even asking for citations here, if it's the first thing.)
A layperson who's studied this in some depth. This is obviously my interpretation of the law since I haven't tested it in courts. However, if it did come to court, I think this would be the likely conclusion.
I'm also interested, though: can you imagine a phrasing or argument that would invalidate the sentence you quoted? How would the restriction be framed? Would it mention a list of concrete programs which you can use to access the website? Can the RIAA, for instance, in your opinion, mandate that a website can only be accessed through Chrome?
Fair question. I don't necessarily have something specific in mind, I just have a generally high prior of things like: the law is complicated; DMCA is complicated; things don't necessarily mean what they sound like they'd mean; arguments like "if you allow this you can't possibly forbid that, what on earth are you thinking" are unreliable at best[1].
But the sorts of things I could imagine going wrong with your argument might be...
* Yes, browsers are in violation of the thing bitwize quoted, but it doesn't apply to them for reasons written elsewhere.
* Yes, browsers are actually in violation of DMCA. People probably noticed this when the law was being written, but no one listened to them. If anyone tried to enforce DMCA against browsers, DMCA would get overturned, so no one's going to try. (I think this is unlikely - if there was an argument that browsers violate DMCA, I think I'd probably have heard of it. Probably. But including for completeness.)
* Browsers need Javascript engines for many many reasons. Youtube-dl (afaik) needs a javascript engine specifically to get around this obfuscation. That could be relevant somehow. (Similar reasoning might say that locksmiths are allowed to own lockpicks and no one else is. I believe the law has roundly rejected that. But that doesn't mean the law would reject this, too.)
Again, not claiming any of these actually apply. Just, this sort of thing is why I'm hesitant to make inferences that seem otherwise sensible.
I very much appreciate your thoughtful response. I think the points you raise are fair, but none of them is strong and clear-cut in the RIAA's favour. In fact, I would call them weak and cannot imagine the RIAA would want to really press this matter in court using any of them.
To answer very shortly: to my knowledge, there are no such exemptions for browsers specifically. Point 2 would actually work against DMCA, as you observed. I think point 3 is defeated by the fact that there are websites which require JS support to initiate video reproduction but which do not use it as an obfuscation technique.
Instead, RIAA is counting on this matter not to reach the courts and everyone submitting to their will out of fear, which ever so slightly modifies public opinion on the matter and pushes the Overton window.
From my experience, in cases like this the law turns out to be somewhat arbitrary and devolves into "whatever the judge(s) of the highest order think". It is extremely important not to get self-defeatist at this point and argue aggressively for the outcome you want to see play out.
Of course, I am not sure how this would in fact play out in court, but I think no one is. If anyone is aware of a concrete fact which makes my reasoning outright invalid, I invite them to cite it.
If browsers evaluate all of the JavaScript and implement the access controls, they are not in violation.
However, a tool which selectively evaluates the JavaScript to decrypt the content URL without implementing the rightsholder's policies would count as disabling or evading effective protection under the DMCA, and thus be illegal. Browsers are not such tools; however, distributing software (say, a GreaseMonkey script) or instructions on how to make a browser evaluate the decryption JavaScript and download the video without implementing the policies would also be illegal.
Why is this so hard to understand? The law is very clear: when a protective measure is set up, be it in JavaScript, Visual Basic, or anything else, anything that gets around that measure is a crime to distribute.
> Are you aware that the decryption you are talking about is simple JavaScript execution and that according to this interpretation, even browsers are in violation?
No, they're not. The Javascript was designed to run in the browsers as authorized user agents so that a user can view the video.
> You can't host something publicly, available via common, general purpose technology and then claim protection when someone uses this general purpose technology to obtain it.
Yes, you can. Literally, the entire point of copyright and IP law is to incentivize creators to make their creations public by protecting them when they do so.
> No, they're not. The Javascript was designed to run in the browsers as authorized user agents so that a user can view the video.
What are "browsers"?
> Yes, you can. Literally, the entire point of copyright and IP law is to incentive creators to make their creations public by protecting them when they do so.
Sure, but that has little to do with the topic at hand since copyright is not being challenged. Clearly it is the RIAA's intent for the content they publish publicly to be available publicly.
The actual DMCA takedown was just an excuse. I'm sure Microsoft's top brass would love to give the gift of killing youtube-dl to their friends at Google. Sort of a "you scratch my back I'll scratch yours". Maybe YouTube will stop begging users to switch from Edge to Chrome.
I'm fine with this. It reminds everyone that Github is a potentially malicious layer-on-top of git that can bite you, even if they seem to be on your side.
Sure, slap a clone on GH for a bit of extra traffic - but keep your main activities somewhere under your control.
I'm kind of worried that some of the value of GitHub's data is derived from knowing which developers browsed/pulled/forked/contributed to different projects, and whether they or their employers could be sued for breaking licenses, infringing copyright or patents, etc.
For example, Microsoft might want to know which of their former developers contributed to say WINE or ReactOS. A patent troll might want to know who pulled code that allegedly infringes their patents, so they can audit and sue their employers. The RIAA might want to know which businesses use YTDL, even if they're using it under fair use, and so on.
Is it not the right time to decentralise these types of services? I think it is. Isn't this site owned by Microsoft? If it is, this is normal big-corporation behaviour. They embrace the open-source community to gain an advantage. Is there a way to host code without being dependent on a big corporation? I am really interested in the answer to this question. Public programming code must be treated like a public utility. For example, like water: it must be ensured that water is drinkable and properly managed, but it remains in the public domain. Does anyone remember Nestle's attempt to privatise water? (https://www.insidehook.com/daily_brief/news-opinion/nestle-b...)
GitHub is not the only player, and there are plenty of other options. Git itself is decentralized, so that's half the issue solved. GitLab, Gerrit, and a bevy of other web front ends for git exist, some open source, some closed.
Github is still the biggest, but Gitlab is probably a close second.
>They embrace the open-source community to gain an advantage. Is there a way to host code without being dependent on a big corporation?
No. This is you and others being completely oblivious to the law. The DMCA is US law AND the US has trade agreements with other countries to similarly enforce US IP law.
It does not matter if it's Microsoft or a small startup. Either they comply with DMCA or the RIAA (and others) can sue for millions in damages.
Take issue with the law and push your elected officials to change it.
Linus gave us git to get rid of centralized repos and we web devs invented github to centralize it all.
Sometimes I feel we web devs behave worse than the fashion industry when it comes to blindly following random trends, making it all religious instead of logical and messing up innovation.
Why would some brunching lawyer know what yt-dl even was unless someone who hired them was making a political play against it?
Given youtube-dl is not copyrighted material the RIAA would have in its mandate, that it replicates functionality that every browser already has, and GitHub is a property of MSFT - this is basically a legal harassment play that implies we're going to need more than decentralized repositories. I'm thinking something like Kali Linux but one that tags certain code repositories as politically exposed and then mirrors them into a distribution via torrents.
Below is a link to Leonard French, a practicing copyright attorney, talking about the situation. According to him, the problem is whether or not yt-dl was designed and marketed to bypass copyright protections. Still a shit situation (re:browsers and screencaps) but it seems like this is more or less how the law is written. IANAL.
The original DMCA request stated that the project violated copyright by suggesting specific songs in its source code. While this is a dubious copyright claim, surely the easy solution is to address the claim by simply replacing those songs in the source with openly licensed items such as Big Buck Bunny?
There are all sorts of legit uses. There are some videos I've uploaded where I long ago lost the original source. Also, if I want to go home and see my parents, it's probably better if I download the TED talks, YouTube videos, etc. that I might like to watch over my own connection rather than theirs.
Valid question! On first thought, you would think the only reason to use it would be for media you don’t own (if you own it, you’d already have a copy). And you are probably correct. However there are valid reasons for doing so: many people have posted here that they use it for journalism (fair use) among other things.
I do remember years back, LinusTechTips admitted that their video archives were just on hard drives scattered around the building. So if they needed a clip from a previous video, they’d just download the video from YouTube and “crop” down to the clip they needed.[a]
[a]: They mentioned this in a video where they were building their "Vault" server that eventually turned into the "Petabyte Project."
I use it to watch YouTube videos in MPV. I have a machine where Chrome won't start and Firefox can't reliably display video without dropped frames, but MPV runs perfectly.
In the case that you were more concerned about the legality than the morality of it, you have to remember that downloading freely available videos on YouTube is perfectly legal. (Of course sharing the downloaded files is not, unless you have a copyright on them, or if they are in the public domain)