This is a law that mainly serves the big copyright holders and, to a lesser degree, impacts the big tech multinationals (read: US companies) less than the smaller ones.
It makes no sense at all. Especially since all member states will have their own law. "Does our filter comply with Belgian law? Also with Luxembourg's? And what about Slovenia's?"
It's a big farce that could only be approved by total morons who don't even bother to listen to people who actually know what they're talking about.
They see overcomplexity not as a problem, but as a source of pride and a major bragging point. It is actually a massive clash of cultures even though they come from the same place as the people they are trying to govern.
The proportionality requirement in the text of Art. 13 is more onerous to larger corporations. If you're a tiny blog with a banner ad or two, you're not getting slapped off the internet for having a comments field, because it isn't proportional to require cost and complexity increases of multiple orders of magnitude to police your comments section. Unless someone comes up with Compliance.ly & Co., which does the work for you at a reasonable price point, in which case we've just opened up a new industry which hopefully results in Content ID going the way of the dodo.
After some litigation occurs in which the boundaries of proportionality are set, we'll be in a better position to analyze the impact of this law.
Do you think Spotify would have been able to grow if it had been created on March 27, 2019, instead of in 2008?
A successful content-filtering-as-a-service company (Compliance.ly & Co. in your example), assuming it gets adopted by all major websites, seems like it would shift the problem to an even bigger gatekeeper than YouTube. How is that a good thing?
In 2013/2014 Ministry of Sound sued Spotify over not removing playlists based on Ministry compilations, created by Spotify’s users. Ministry claimed that its compilations qualified for copyright protection due to the selection and arrangement involved.  
 - https://www.theguardian.com/technology/2014/feb/27/spotify-m...
 - https://www.theguardian.com/technology/2013/sep/04/ministry-...
Not really? This isn't a flat 'you need to pay 10k a year regardless of your size' imposition. Proportionality is important.
The articles, as written, are interesting because they already mention a ton of the balancing considerations. All of those are completely absent in these conversations.
Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.
And that's how the internet will die.
>Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?
If the competitive landscape were the same? Yes. In fact, Spotify's arc is exactly what this law is attempting to encourage. As they grew, they became a quasi licensing clearinghouse instead of another Napster or Limewire. That's the entire point.
>how is this a good thing?
Because you don't end up with 1 compliance service, and you can litigate against the compliance service if they're inappropriately killing your content creation business. As it stands now, if you try to fight YouTube or the content delivery pipeline itself on the basis of their filters, you die. That's not necessarily the case if there's a healthy competitive filter ecosystem. Whether or not we get to that point is another question, though.
The problem is the proportionality requirements are poorly designed. It would be one thing if requirements increased solely with revenue, but increasing with time or user count is purely destructive.
Plenty of small services will hit the time limit before they're big, and then the costs destroy them before they have a chance to be. And the fact that that's likely to happen will keep many people from even trying to begin with.
And user count doesn't mean anything if the profit per user is low. Many side projects have a million users; that doesn't mean they're making any money that could be spent on filters -- many of them are lucky to even cover their own hosting costs.
> Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.
That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.
I don't think this is the issue. The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.
Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.
The mental calculus I see here just doesn't take into account how courts work.
>That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.
I think most people can agree that the cut-and-dried abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.
Justice doesn't scale linearly, which is a very, very big problem -- but not one that's unique to the Art 11/13 debate.
But that's part of the problem. It means a service you operate today is subject to a law that will be decided on tomorrow. So you either make the conservative choice, which is onerously expensive and may put you out of business immediately, or you risk being the case of first impression where the more cost effective choice you made is decided to be insufficient, and that too puts you out of business -- but only after you've dedicated years of your life to it.
> Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.
Users don't scale linearly either. Things have network effects. Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.
And again, just because you have a lot of users doesn't mean you make a lot of money. Your project may have had a million users for a decade, but if the revenue from those users is only just covering your hosting costs as it is, now you're out of business.
> I think most people can agree that the cut-and-dried abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.
Which means that it isn't, because then nobody does that and there is no penalty for continuing to do it in practice. And the solution to that is quite straightforward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.
Moreover, even the existing penalties are quite useless, because the biggest problem isn't overtly fraudulent claims; it's the extremely high volume of false positives the claimants have no real incentive to reduce.
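To make the false-positive problem concrete, here's a toy sketch (all names and sample data here are made up, and real systems like Content ID use perceptual audio/video fingerprinting rather than byte hashes): a filter that matches uploads against a rights-holder index can only see similarity, never context, so a lawful quotation gets blocked exactly like wholesale piracy.

```python
import hashlib

def fingerprints(data: bytes, window: int = 16) -> set:
    """Hash overlapping windows of the content (a crude stand-in
    for real audio/video fingerprinting)."""
    return {
        hashlib.sha256(data[i:i + window]).hexdigest()
        for i in range(0, max(1, len(data) - window + 1))
    }

# Hypothetical rights-holder reference work registered with the filter.
reference = b"all rights reserved: the famous chorus everyone quotes"
index = fingerprints(reference)

def filter_upload(upload: bytes) -> str:
    """Block anything sharing a window with the reference index.
    The filter only sees byte similarity -- it cannot see licenses,
    quotation, parody, or any other lawful-use context."""
    if fingerprints(upload) & index:
        return "blocked"
    return "allowed"

# A critic quoting a short excerpt (arguably lawful) is blocked anyway:
review = b'My review: "the famous chorus everyone quotes" is overrated.'
print(filter_upload(review))            # blocked -- a false positive
print(filter_upload(b"original work"))  # allowed
```

The point of the sketch is that lowering false positives costs the claimant nothing, while every false positive costs the uploader a dispute process.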
No, it isn't. Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.
>Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.
Yes, and then 95% of those go back down to pre-spike levels of interest. If they're the odd exception with a massive sustained uptick for a service which promoted copyright-protected works, they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.
Just because Napster was once small doesn't mean their business model was going to be exempt from attention forever.
> And the solution to that is quite straightforward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.
That's not simple. Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.
We like to believe there's no Kolmogorov complexity associated with getting justice, but getting justice requires translating reality into consensus at some level of fidelity. That process is EXPENSIVE.
>the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce
Maybe on YouTube that's the case, but that's more of an issue with us having a system of private algorithmic arbitration, which is a separate issue. The courts are too expensive to follow up on individual claims, and the only alternative is for content holders to sue YouTube for big $$$ through content collectives (the threat of which is why we are where we are).
That is separate from the problem that the "new law" created by the court is being imposed ex post facto on actions you've already taken.
It means you don't know what the law actually is yet when you're trying to comply with it. That kind of uncertainty leads people to make overly conservative choices that make beneficial projects uneconomical, or just causes them to give up because it's not worth investing years of your life in something the courts might unexpectedly blow apart.
And if you want someone to take input from the EFF et al then why should we wait until it's already in court instead of doing that in the legislature before passing a bad law to begin with?
> Yes, and then 95% of those go back down to pre-spike levels of interest.
But the fact that they did have a million users for twelve months may get them hauled into court.
> If they're the odd exception with a massive sustained uptick for a service which promoted copyright-protected works, they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.
Again, you're assuming that success comes with popularity. If you're losing money on every user you can't make it up on volume.
There are projects operated by individuals with a large number of users that operate at a net loss. If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.
And the projects that actually are successful would have high revenue, so the only projects ensnared by a user count limit but not a revenue limit are the ones that are barely making it as it is.
> Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.
Yes, exactly, so if that process is used then the penalty would need to be sufficient to justify the victim in going through that process.
But now let me ask you this. How is it that we're willing to impose a prior restraint without going through that process but not a penalty for false claims?
Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.
Yes, it sucks, but this is business as normal. The tension between certainty and flexibility in the law is a longstanding one.
You want these elements decided at the court level because these elements change, and legislation needs to be good law for a looooong time, whereas a shitty ruling can be blown up in months (sometimes in days).
>But the fact that they did have a million users for twelve months may get them hauled into court.
If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.
> If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.
Why would they need to implement Content ID...? That's the nuclear option in the field.
Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material? It doesn't.
The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.
And court decisions that make major changes like that are rare, exactly because they result in widespread burdensome changes to existing behavior that would have been less burdensome if what was required had been better specified to begin with.
If you pass a law that requires such a court decision to happen before anybody knows how to comply with the law, what is anyone supposed to do in the meantime?
And when many of the questions are obvious, not bothering to answer them is just punting, because they know the answers will be problematic.
> If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.
Everything with user generated content is "a platform that shares and promotes other people's copyrighted works" and they're intended to be licensed from the user/creator. That the platform has no good way to know when what the user uploads is unlicensed is the whole problem.
And if they didn't have some way to do that when they were small then they don't have it when they first become big either. If you need a solution before you have a million users then you need a solution before you have a million users -- and then we're imposing the same burden on the little guy as on Google, if the little guy ever hopes to become Google without promptly getting sued into the ground.
I also reiterate that user count is unrelated to resource level. An individual can operate a platform with a million users and make no profit from it, but impose a laborious content filtering requirement and that platform is gone.
That is presumably the sort of thing they're trying to protect with language about non-profits, but this is where the ambiguity bites us again. If an individual operates a forum as a labor of love where the ads break even with the hosting costs, is that non-profit or not? What if some years there is a "profit" of $200/year? An individual who doesn't want to be bankrupted by lawsuits is not going to enjoy rolling the dice there.
> Why would they need to implement Content ID...?
We don't know what they would need.
> Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material?
Are blog comments not copyrighted material?
How is the platform supposed to know what is being shared there without reading it all?
> The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.
The objective of DMCA 1201 wasn't to keep farmers from repairing their tractors.
The issue is the divergence between their stated objective and what they did.
In practice, it will all be up to the judge:
1. Was your AI filter adequate to properly filter the content?
2. If not, how high can the fine be?
There is 1 easy solution to all of this: incorporate outside of the EU.
1b. Regardless of (1), can you prove you made "best efforts" to acquire licenses for the content that was later found on your platform?
It's not specified who you should be seeking deals with, how you're supposed to know ahead of time what a user will upload, how you're supposed to identify the true rightsholders of an uploaded work, etc.
That criterion must even be fulfilled when you're less than 3 years old, by the way!
That's the case for any piece of legislation.
The test isn't 'if your AI was good enough'. For the majority of people the most important part is: 'is it proportional to even use AI at your size?'
To which the answer is no.
If you're running a stream or YouTube channel of self-created content, the cost of moving dramatically exceeds the total cost of the legal risk you're eating by staying put.
How does the EU legislation change how that works? It already exists.
Edit: Content ID already covers the requirements of Art. 13 under any reasonable reading of the legislation. Things aren't going to get worse because of the legislation. They'll get worse because of pressure from their content partners and because they refuse to spend on human support. Why spend when you can do nothing instead?
Your speculation doesn't make legal or business sense.
But hey, if you are outside of the EU, no problem. So guess what streamers will do.
This is not rocket science, you know. This is just simple cause and consequence.
Stricter filters for EU citizens. And hey, maybe if we are lucky, YouTube decides the EU isn't worth the effort anymore and decides to use the block filter.
The concern over data-use at filtering service companies is new to me and interesting but substantially mitigated if they are compliant with GDPR. I haven't seen this argument before, so I'll have to take a look. Thanks!
I'm sure everyone is dreaming of having a "tiny blog"</irony>
Meanwhile, in the real world, the European streamers and content creators who make a living from their content are looking at how to escape the EU so their content doesn't get filtered out.
I did. I've followed every public draft of the language as it's developed.
The article does not do what people are claiming it does. The internet is not dead. Small content creators are not being wiped out. The big tech giants are not creating yet another regulatory moat.
There are plenty of real problems with Article 13 that deserve discussion and elaboration so that when the first cases come out, they get decided properly, but this isn't a nuclear bomb that blows up the net and makes it a corporate-only zone.
You clearly didn't.
From the text itself: "for less than three years and which have an annual turnover below EUR 10 million".
Do you see the "and" there? This means that ANY business that is older than 3 years NEEDS to comply with the filter requirements.
I read the text, because it directly impacts my platform. The solution is: start a foreign corporation.
Your comments here, and in your other posts where you think that streamers have "legal" problems, clearly indicate that you have absolutely no clue what you are talking about.
Small content creators will be filtered out, and small platforms will need to comply with all the different laws of each EU country. This is crazy.
I did. I wrote at length about it in the previous thread, and provided links to the language of the articles as well as the elements that were ignored.
You need to read ALL of the language to understand how the proportionality requirement impacts the scope delimitation requirement you're listing.
If you don't do that, you end up with a broken understanding of how the gears fit together.
The legislation does have holes in it, but they aren't that 'small content creators will be filtered out'. People aren't going to litigate against small content creators in the first place. They're going to get smacked by Content ID, which is already ruining livelihoods, but which is a completely separate issue from the EU legislation.
It's about the implications, how it relates to the status quo online and how the digital economy works. What they're trying to enforce is just irrational and goes against the natural flow of things. They're nuts.
This video sums it up nicely: https://www.youtube.com/watch?v=t7tA3NNKF0Q
I think this sort of reasoning is largely fallacious. Just because people view your stuff doesn't mean that if you're successful in locking it down that they'll then pay to view it.
I feel the media companies know this, and that's one reason they demand ever-increasing copyright terms -- to avoid older content eating into current profits.
And by definition this can be seen as a loss, since the viewing itself is the revenue generator.
I haven't seen any support for the articles which actually shows the effects of the policy will be good, rather than arguments saying "it's meant to be good". Which is a fallacy that affects many policies which later end up having adverse effects.
But ultimately bureaucrats are happy whenever there is an excuse to increase bureaucratic power.
For the particular point you're putting forward, to justify the EU policy you have to at least show 1) that those media outlets would receive all the traffic those FB posts generated if the FB posts didn't exist in the first place, and 2) that this outweighs the costs of abusing that policy (claims over fair use, e.g. the YouTube copyright system) and of content that simply will not get reshared, even if it's fair use and links to the source material, out of fear of triggering the safeguard mechanisms.
I was just trying to put in perspective WHY the politicians feel the need to do this. It's mostly backlash against Facebook for years of content stealing.
YouTube and its Content ID system are actually what this law wants to introduce everywhere. While not perfect, it's still better than Facebook, which seems to be lawless on copyright.
In fact, it's all about the music industry wanting higher licensing payments from YouTube: At least as much per play as e.g. Apple Music pays. They call the fact that they're not getting that today the "value gap" – THAT'S the undisputed reason/justification for this law (just google the term).
(Facebook, by the way, also has a content filter: https://www.facebook.com/help/publisher/330407020882707)
It's also why China has such lax IP laws. They are more of a manufacturing powerhouse than an IP powerhouse ( for now at least ) so they have little to gain with stringent IP laws. When their IP portfolio increases, you can bet that their government would be all about IP protection.
And going back even further, we had some of the laxest IP laws in the Western world during the 1800s, because we had so little IP to protect. That allowed our businesses to take a ton of IP from IP-rich Britain and Europe.
It's greed and selfishness.
Google could have stopped all this by immediately kicking all European newspapers off every Google service they have, and reinstating them only after they filled out and submitted the form allowing Google to use their content without any pay.
Instead, Google only threatened to do this, and European newspapers thought they had some power.
The only power they have is making and breaking European politicians, hence the current mess.
They did exactly that when Germany introduced the "Leistungsschutzrecht", which was pushed and lobbied for by all the major German publishers. Needless to say, they all agreed to offer their snippets for free when Google presented them with their options.
We are no content powerhouse precisely because we are so concerned about all these bureaucratic things. Instead of just distributing better content more efficiently, we prefer to make it illegal to be better than the status quo.
Can't speak for all of Europe, but Internet-related legislation here in Germany has been a disaster since the mid-1990s.
Germany also probably has the strongest tech industry in Europe. Or at least as strong as France and the UK, it seems.
It's election year. Since the big publishers are all for the reform, any politicians opposing it must fear for bad press.
Did this legislation get through because of nasty lobbying? Or was it brought in to stem the tide of American tech companies destroying more European businesses by hiding taxes and dodging copyright?
>Europe isn't exactly a content powerhouse.
Europe has plenty of 'content powerhouse' companies. They just don't wear their nationality on their chest when they sell to the US.
Strengthening privacy protection makes the most popular model for sites to pay for content creation and operating costs--selling information about their visitors to advertisers--much less effective.
Maybe as part of that they want to make it more viable for sites to switch to a direct selling of content model?
1: prevent free news websites from linking to their paywalled website and paraphrasing/quoting the whole thing (most people won't pay for the original news source when they can read practically the same thing on a free website). Article 11 prevents this from happening without compensation to the original source.
2: prevent any single user who has access to the paywalled website from posting the entire article onto websites like Hacker News and Reddit, which I see happen all the time (that, and outline/archive links). Article 13 prevents this with automated filters; if those fail, the news website can just sue the hosting website and get compensated that way.
This law's intent is to prevent unlicensed content from being available to European consumers. That will probably mostly work, with the usual caveats and unintended consequences we all know about here on HN.