"We filed comments this week, explaining that the DMCA is generally working as Congress intended it to. These provisions allow platforms like the Internet Archive to provide services such as hosting and making available user-generated content without the risk of getting embroiled in lawsuit after lawsuit. We also offered some thoughts on ways the DMCA could work better for nonprofits and libraries, for example, by deterring copyright holders from using the notice and takedown process to silence legitimate commentary or criticism."
And if you want the plain text version of the above:
Or, is that pretty much "what Congress intended?"
1. Safe harbor for websites: you don't have personal liability as long as you comply
2. Ability for copyright holders to get stuff taken down quickly
1 is working fine; 2 is subject to abuse. The post says that 1 works.
Do you have a source that says they were forced to do it by outsiders? Or any plausible way in which the DMCA required that?
Edit: also, content ID is automated checking of uploads. The ability for copyright owners to take down content directly is under a different program, I think.
Google is basically volunteering to allow people to issue takedown requests without consequences; if it were an actual DMCA takedown, they would have to follow the formal statutory notice process.
And yes, Content ID is another system entirely where Google pattern-matches uploads (and old stuff when they feel like it); when hits are found, Google tends to reassign the monetization to the "content owner" that registered with Content ID. Getting that money back has generally been impossible, even in cases where Content ID clearly screwed up (false positive).
If you're interested in the history of how Google has been using Content ID, there is an older discussion from a few years ago when Google really started to abuse Content ID hard.
So YouTube introduced Content ID to show that it really was taking copyright seriously, and not just relying on plausible deniability.
Viacom mostly lost the case because there was never much evidence that YouTube employees knew about individual copyrighted videos on the site.
Google probably keeps the system to avoid new suits and to make business partners happy. A lot of Google content isn't crowdsourced anymore; it's uploaded by big business suppliers who are essentially business partners. I imagine those contracts require Content ID to be used.
How would "notice and staydown" (vs. takedown) work with systems like IPFS which use content hashes? Would there be centrally maintained blacklists against which all hosting companies would need to screen inbound content?
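To make the "centrally maintained blacklist" idea concrete, here is a minimal sketch (the blocklist contents and function names are invented for illustration, not any real IPFS or hosting API): each host would hash inbound content and refuse anything whose hash appears on a shared list. Note two properties that matter for the discussion below: the check can only answer match/no-match, with none of the context a legal determination needs, and changing even one byte of the content produces a different hash, so the list is trivial to evade.

```python
import hashlib

# Hypothetical shared blocklist of SHA-256 hashes of noticed content.
# In a real deployment this would be fetched from a central registry.
BLOCKLIST = {
    hashlib.sha256(b"infringing clip bytes").hexdigest(),
}

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may be stored, False if its hash is blocked.

    The decision is purely binary: a hash match says nothing about
    fair use, licensing, or jurisdiction.
    """
    return hashlib.sha256(data).hexdigest() not in BLOCKLIST

print(screen_upload(b"infringing clip bytes"))   # exact match: blocked
print(screen_upload(b"infringing clip bytes!"))  # one byte differs: passes
```

The same exact-match limitation is why real systems like Content ID use fuzzy perceptual fingerprints instead of plain hashes, which trades the evasion problem for a false-positive problem.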
In general, the reason copyright enforcement is pushed for is not accurate enforcement of laws, but rather maintenance of business models while paving over "minor details" such as legitimate contracts and licenses between copyright holders and others.
The digital copyright regime has effectively paved over centuries of intricate law to create a binary of "free" and "nonfree", with no internal distinctions or intermediates. That this is not widely recognized is a sign of how effectively media conglomerates control perception of the issue.
You simply can't justify unilateral global takedowns on copyright grounds. Plenty of countries legitimately disagree about copyright rules, and those rules just aren't as simple as a global "copyrighted?" flag. If you want to be a global moral police, you can justify blocking hashes for those reasons as long as there are no collisions and everyone agrees, but that doesn't sound terribly likely either.
So for example, I'm sitting next to a teacher waiting for a train and reading the newspaper. The teacher sees a story and says hey, can I have that when you're done so I can make some copies for classroom use?
That kind of copying is clearly fair use, right? But have the same thing happen on the internet, under some kind of hash-based copying prohibition, and the teacher can't copy the story from me, because the computer has no way to know that the law allows the copy. So it can only allow everything or prohibit everything.
That sort of system can't work. It doesn't have the information or context or logic necessary to make a fair use determination. But "prohibit everything" is exactly what Disney et al want, so they're always pushing for it anyway.
Unless you're convinced that there is never a legitimate fair use case, feel free to substitute whichever you like.
Offload the copyright risk to those more willing to take it and you get to keep doing your friendly neighborhood scraping.
But... why should we tolerate copyrights? What is in it for me?
I kinda like the idea of people doing paid work. If someone wants a something-for-nothing kind of formula they should pay for it themselves rather than creating an impossible burden for others (if not the whole world)
One can get enough funding before creating a work or before releasing it. After release the audience can make donations and/or choose to fund future works. This should be good enough for what we need.
Before we choose/prefer global mass persecution until the end of time over the crowdfunding formula, we should first have a good reason for it. Failure to preserve history for that extra bit of entertainment exploitation is not worth it. Not just because it fails to be entertaining.
I can't think of any, but there might be a few works important enough for copyright to be an ideal formula; we can't expect it to work at the scale we are applying it right now.
At the very least a lack of license should default to something like creative commons. (If I rub a bit of snot on some paper I don't want to own the rights to it.)
If enough people want a copyrighted work they are going to get it anyway. The "dream" of artificial scarcity with infinite exploitation has ended. We have to write realistic laws now. Something that doesn't violate basic logic.
We the audience would gladly pay for a new season of Star Trek. I suppose the fear here is that the audience would have influence on the programming?
Privacy is hard in the internet age.
You can't say something on live television and then ask them to delete the tape for privacy reasons, well you could but it would be dumb.
You can't expect privacy when you have a meeting in a glass room I guess is what I'm saying.
When you are meeting in a glass room, you can't expect privacy in the moment, but if someone takes the intellectual property you shared in that glass room and puts it somewhere else, you certainly can expect them to respect the laws that allow you to require them to stop. If they don't, you should expect law enforcement and/or the courts to assist.
The broadcaster would typically own the copyright to the video. I am sometimes given specific waivers to sign when I'm recorded at conferences and the like--mostly because rights to use material for marketing/commercial purposes are more restricted than the same material used for editorial. Frankly, most events etc. don't bother because the (correct) assumption is that people doing things in public aren't going to suddenly want to get rid of the content.
Also, giving a presentation to thousands of people, whether at a conference or over a broadcast, is at least perceived a lot differently than leaving a comment on a message board, especially if it's a small one or a niche community, so different things are shared.
Suppose one is foolish as an 18-yr-old and posts on a forum most browsed by his friends, "Ha! I just got Lifelock and since I know you can't do anything to me when I have that, my SSN is 999-99-999. Just try to steal my identity!" Suppose the IA saves this statement. The OP would have a copyright interest in it and would be within his rights to point out that the IA has no privileges that entitle it to rehost that content, so please take it down. That is totally fair.
There is no reason that copyright law should only be usable by media conglomerates that mostly use it to stop the spread of free culture and not by private individuals trying to clean up some of their past mistakes.
To play devil's advocate against myself, I have made mistakes in the earlier days of the internet that I am glad the archives failed to keep. I do understand that there is a need for privacy-friendly user sites, but I am sceptical about what tools are allowed to actually perform this structure. Right now, the internet is a threat to the powers that be, which is why we will see an ever increasing attempt to legislate it into the ground. If we allow government corruption to seep into the internet any more than it already has, the real concern will be one of censorship and propaganda, and user privacy is less to do with publicly posting things you shouldn't, but more to do with the corporate/government merger and data sharing that is going on around us. Loopholes everywhere for suppressing dissidents.
I expect it's a legal gray area that mostly works in part because most random forum posts are pseudonymous.
Why not? Forum posts are copyrighted at the moment of creation just like any other work, and while one certainly gives a license (implied or not) to the forum, there's no reason why that license would extend to the IA.
So perhaps a new IA that only indexes creative commons licensed sites might be in order?
IANAL, but basically yes. It's one reason why the IA respects robots.txt even retroactively. The reality is that, if something was posted publicly by a copyright holder and intended to be shared, the overwhelming majority of people/entities don't care that it's being archived somewhere but there's no particular exemption for something like the IA.