The bigger, more popular, and more ubiquitous it became, the more corporate and political powers sought to rule it. Now we are in an age of internet giants with the GDP of small countries, and of political elections being swayed by the bovine herds of Facebook and Twitter users (or "useds", as Stallman calls them!). The internet has come so far from my happy memories of the late 90s.
My prediction is that we will see multiple 'internets', whether for political reasons (e.g. China) or commercial ones (someone like Facebook or Google providing its version of the internet to a third-world country).
Then of course we have things like the dark web. I think many will stop seeing the darkweb as a place of CP and drug dealing, and start seeing it as an internet free from regulation.
It's an interesting point in history.
(I'm a bookworm, so please share any recommendations on this topic!)
I think it's more likely that most people will forget that the internet was once wide open and will accept the locked down state as normal.
> I think many will stop seeing the darkweb as a place of CP and drug dealing, and more of an internet free from regulation.
You mean Tor, I presume. But Freenet is still around, although it's too risky to use it, except via Tor. And there are other overlay networks. Some basically just use VPN connections (such as tinc).
And they were always about being "free from regulation". It's just that hobbyists, activists and people into recreational drugs and CP were early adopters.
If you haven't read Vernor Vinge's True Names lately, I highly recommend it.
Hi, I've been a Freenet developer for the past ~10 years, so I'd like to clarify a few things :)
(The project is still active, there was a release just this week!)
While there technically were indeed lawsuits in the US, the situation is not as black and white as "it's dangerous".
It is an anonymizing peer-to-peer network as well!
What is dangerous under certain circumstances is only one of the three modes to use it:
1) Opennet, where Freenet uses random strangers as peers.
2) Darknet, where you only connect to peers you manually select, e.g. your friends.
3) Opennet with some Darknet peers in addition (I'll call it "mixed mode").
So Opennet potentially allows law enforcement to connect to your Freenet node and thus analyze your traffic.
Still, this does not mean that your Freenet will plainly tell its peers what you are downloading!
Traffic is always redirected across a random number of peers, none of which tells the others who requested it - which provides plausible deniability.
All traffic is encrypted, only the recipient can decrypt it.
So you cannot just watch traffic and filter out illegal JPEGs or whatever.
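The plausible-deniability argument can be sketched with a toy simulation (my own illustration with made-up hop counts, not Freenet's actual routing): an observing peer cannot tell whether the node a request arrived from originated it or merely relayed it.

```python
import random

# Toy model (my simplification, NOT Freenet's real routing): a request
# arrives at an observer from one of its peers after some random number
# of extra forwarding hops. Only if zero extra hops happened is the
# immediate sender actually the originator; otherwise it is a relay.
def immediate_sender_is_originator(max_extra_hops=5):
    extra_hops = random.randint(0, max_extra_hops)  # uniform 0..5 (assumed)
    return extra_hops == 0

trials = 100_000
hits = sum(immediate_sender_is_originator() for _ in range(trials))
fraction = hits / trials  # how often "my peer requested this" would be right
```

Under these made-up assumptions the observer's naive accusation is right only about one time in six; the real statistical analysis is of course far more involved, which is exactly why it is contested.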
What LEA did then was to come up with some math and claim to deduce from it that there is a certain probability that the illegal downloads were requested by the people they claim requested them.
Their math is known and discussed by the Freenet core team, it may be addressed eventually - but from watching the discussion (not the math) I can say it should be taken with a grain of salt.
It's not absolute proof that the claimed downloaders were in fact the downloaders.
It's just a probabilistic assumption, which may possibly be wrong because the way Freenet works is rather complex (>200 000 LOC).
So as Freenet stores content encrypted on random users' machines (which is the advantage over Tor - Freenet is completely decentralized!), it is imaginable that law enforcement accuses people who did not willingly download it, but just happened to store it.
But: You can use Freenet in Darknet or mixed mode to be reasonably safe:
The more of your peers are not controlled by attackers, the lower the probability that a statistical attack can be conducted.
Further, said legal cases have only happened in the US to my knowledge, and I'd argue that the legal system of that country seems a bit flawed.
Outside of the US you can just run Opennet and probably be at the same risk as some random non-exit Tor node.
You transport traffic which you cannot look into (because it's encrypted) and store files which you cannot look into (because they are encrypted), so what's illegal about it anyway?
Further, it should be clarified that this is not a problem specific to Freenet:
ANY network which tries to be anonymous will suffer from the so-called "Sybil" attack if it connects to random strangers:
If an attacker runs e.g. 100 000 machines on a network of only 1000 actual users then the probability that a single user only has connections to them is very high.
And anonymization must rely upon redirecting traffic across multiple peers - but it cannot if all peers belong to the attacker.
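The scale of the problem is easy to see with a back-of-the-envelope calculation using the numbers above (assuming, for the sake of the sketch, that peers are picked uniformly at random):

```python
# Probability that ALL of a user's k randomly chosen opennet peers are
# controlled by an attacker running `attacker_nodes` machines alongside
# `honest_nodes` real users. Uniform random peer selection is assumed;
# real networks use smarter selection, so this is only illustrative.
def p_all_peers_hostile(attacker_nodes: int, honest_nodes: int, k: int) -> float:
    total = attacker_nodes + honest_nodes
    return (attacker_nodes / total) ** k

# With 100 000 attacker machines on a network of 1 000 real users,
# even a user with 20 peers is very likely fully surrounded:
p = p_all_peers_hostile(100_000, 1_000, 20)  # roughly 0.82
```

Even twenty peers barely helps when the attacker outnumbers honest users a hundred to one, which is why darknet (manually chosen) peers matter.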
To my understanding Tor addresses this problem by heuristics, e.g. closely monitoring important, big machines in their network, trying to ensure they are in fact distinct entities - but that is really just guesswork, not hard mathematical security.
If Tor wanted to be truly secure it would have to add a darknet mode as well.
I should have been clear that I was talking about opennet mode. If you want to use Freenet in darknet mode, among people who know each other well, and trust each other, it's at least safer than (say) using a private torrent tracker. I mean, torrent traffic is also encrypted, these days.
It's true that you're relatively safe from adversaries, if you only use darknet mode. But there's always the possibility that one or more of your peers will get busted through some other exploit. And that they cooperate, and become informants.
But in darknet mode, you can only communicate with your peers, and can only access stuff that you and they have uploaded. If you want to communicate with the global opennet, and share stuff with it, at least one of your peers must have opennet peers. And that exposes them, at least, to adversaries.
If they get busted, and cooperate, others in the darknet are now at risk, because an adversary could use their client to probe its peers. They couldn't add other peers to the darknet, however, without some social engineering.
So anyway, it's whichever nodes peer with the global opennet that are the main risk. And to do that safely, one can use anonymously leased throwaway VPSs as gateways to the global opennet. You reach them via Tor. So if they go down, adversaries don't learn anything actionable about the darknet itself.
In recent years, investigators have been using customized clients to serve child porn, and track which peers receive it from them. For IPs in their jurisdiction, they get and execute search warrants.
Although there is arguably plausible deniability, most defendants lack the will and resources to fight. So they typically plea bargain.
Anyway, if you use Tor, they can't find you. But it's not as simple as that, really. Basically, you lease a VPS, working ~anonymously via Tor. You run a Freenet node on the VPS, and access the webGUI as a Tor onion service. They can take down the VPS, if they like, but won't know who was using it.
Traffic is not just obfuscated, it is encrypted. Sure, you see the IP of a peer which transfers stuff across your client - but you do not know what the stuff is as it is encrypted.
So the IP address is worthless unless you figure out a way to guess what the stuff is, and who requested it.
See my other reply in this thread for further details.
As far as "who requested" the stuff, as you say in your other reply, they have some statistical arguments. I agree with the Freenet Project that they're very likely bullshit.
However, if you're facing criminal charges, you'd better have resources for expert testimony to discredit their arguments. And if you don't, accepting a plea bargain may be the best option. Even if you are truly innocent.
But generally, I'd rather avoid having a warrant served, and my stuff impounded. So advertising my IP address as a Freenet node seems like a dumb move.
If the dark web becomes popular though, won't the same people try to regulate it?
They could make .onion links illegal for ISPs to load and kill it overnight.
"People who use the darknet usually are up to no good. This simple realization should be reflected in our legal system."
"I understand why the darknet can be useful in autocratic systems. But in a free and open democracy, in my opinion, there is no legitimate use."
It's super difficult to build a low latency mix-net that covertly works inside an adversarial network.
In addition to the current models upon which onion/garlic routing are based, you would (at least) need to add traffic obfuscation, a series of covert channels, NAT bypass, ... to the core of your software.
Tor and other mix networks simplify the problem assuming that there's a portion of the Internet that is free and introducing censorship circumvention mechanisms.
Of course, the assumption is increasingly untrue.
In this case, it helps if the evil US military actually does use Tor :)
480,000,000,000,000 bits / 86,400 sec = 5,555.56 Mbit/s
And you can send as many boxes as you like in parallel.
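The arithmetic checks out (assuming, as above, a single box holding 480 terabits delivered once per day):

```python
# Sneakernet bandwidth: one box of storage media delivered per day.
# 480 Tbit per box is the figure assumed in the comment above.
bits_per_box = 480_000_000_000_000
seconds_per_day = 86_400
throughput_mbit_s = bits_per_box / seconds_per_day / 1_000_000
# -> about 5555.56 Mbit/s per box; parallel boxes scale it linearly.
```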
I fondly remember conversations on AmiNet, an Amiga-dedicated FTN network.
That was all done over PSTN with 9600 - 19200 bps modems. Latency was days. All this didn't preclude massive amounts of collaboration over it.
I'd rather question if lower latency delivers any benefit.
I imagine that in practice the postal service may start to decline your custom somewhere around the quintillion-box mark, or perhaps even before...
What if a billionaire launched a constellation of low earth satellites which provided Internet?
They're not: regardless of medium, the same cops and IP lawyers can get on it and track down people to arrest and sue according to whatever laws are on the books, good or bad.
What if this satellite Internet service is a multinational corporation, with the directly owning entities based in Russia and China? Or perhaps Sweden? Part of a conglomerate under the ultimate control of a corporation on Mars?
I think we'll definitely have crossed some threshold when we have our first extradition from a different gravity well.
Today, if people in Russia or China run the servers there isn't much the EU can do (see: scihub and the US, for a long time piratebay too). You don't need a massive indestructible satellite constellation for that.
I'd also add that a billionaire isn't the criteria you're looking for here even if it was a policy targeting the people running the wire. It's a foreign government with sufficient military power to deter the US from arresting you, and sufficient technological power to set up such a network. Maybe it's a Russian Billionaire who launches them in your hypothetical, but it's the Russian government who provides the security that allows him to do that.
I was imagining containerized server clusters in low earth orbit as well, with the ability to rapidly export the entire state of servers across super high bandwidth laser links. Everything would be done remotely, and the corporations running them would also be in Russia, China, Mars, etc.
(Containerized in the sense of hardware in a shipping container, not Docker, though that would play a role as well.)
The latency can be huge when we have less-than-optimal conditions.
Since the throughput is low and the latency high, it's not streaming-video capable. A buddy downloads a load of movies to an SSD and drops that off every now and then.
For real-time news I use an actual radio. There are some good news and music stations in the area.
I have an offline copy of the Spanish-language Wikipedia.
Sounds like the 21st century version of Samizdat :)
You want to regain your freedom? Use not-for-profit, decentralized platforms instead. You can use Mastodon instead of Twitter, PeerTube instead of YouTube, Aether instead of reddit, etcetera. Other interesting P2P projects are DAT's Beaker Browser, and ZeroNet. None of those will have problems with Article 13.
EDIT: "Such [content-sharing] services should not include services that have a main purpose other than that of enabling users to upload and share a large amount of copyright-protected content with the purpose of obtaining profit from that activity." This is from page 62 of the document wherein Article 13/17 is to be found.
Article 13 dooms smaller companies and startups, thus further entrenching these big corporations. There was a provision added to Article 13 to protect "small and medium-sized enterprises", but according to the EFF this "protection" is fatally flawed. It only protects them for 3 years, or until they attain 5 million unique visitors, or until they attain annual revenues (not profits) of €10 million.
That's not to mention that the exceptions for not-for-profit services have also been regarded as vague, which could be problematic.
Europe tries to catch up to the Silicon Valley startup scene, but stuff like this makes it pretty clear that the EU is its own worst enemy.
I don't even see them trying to do this...
Can you elaborate why that is?
We're talking about teenagers here, so it's not always clear to them that they cannot use ripped sprites from other games, or music, or whatever.
Basically I can make the uploader responsible for what they upload.
The secondary problem is that my biggest competitor also has a lot of copyrighted material, so I'm already very careful with that not ending up on my platform.
With this new law, anyone can sue me if there might be some sprite on there that they created. If I was my (non-EU) competitor, I would anonymously upload some of my own content to sue the EU company. Basically I'm a sitting duck.
I'm currently working on my platform alone, so implementing a filter is impossible. Even with a big team it would be impossible, since slightly modified sprites are derived works and so also copyrighted.
But if I'm outside of the EU, I can just block that region (not the biggest one anyway, and after the UK leaves, not a single native English speaking country in there).
If I get a competitor from the EU in the far future, I just do the upload & sue trick.
Oi! Ireland and Malta would have a word with you, mate.
> I can just block that region (not the biggest one anyway
Not the biggest, but the richest.
Of course since I'm just a nobody on a forum what do I know.
It depends on what you mean by rich.
From a GDP PPP perspective there are issues on short, medium and long term when compared with other countries.
China is richer than the whole of the EU (incl UK).
US is almost as rich as the EU.
India is 1/2 and Japan 1/4.
*By rich I mean GDP PPP.
Not only does the US have about ~40% of all the millionaires on earth all by itself, its GDP per capita is 77% higher than the EU ($33,700 per capita per the Worldbank 2017 figures; versus $59,700 that year for the US). Its nominal GDP is also about $2 trillion higher, despite having roughly 200 million fewer people.
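For what it's worth, the 77% figure does follow from the quoted Worldbank numbers:

```python
# Per-capita comparison using the 2017 Worldbank figures quoted above.
eu_gdp_per_capita = 33_700  # USD, EU, 2017
us_gdp_per_capita = 59_700  # USD, US, 2017
pct_higher = (us_gdp_per_capita - eu_gdp_per_capita) / eu_gdp_per_capita * 100
# -> about 77% higher
```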
PPP is a near worthless measurement if you're a business trying to sell goods. It's the absolute last thing you'd rely on to gauge the pricing power in a market for a product or service.
But I agree with you that from a business perspective the US is de facto the place to be.
This kind of behaviour is going to lead us to having two separate internets.
> Not the biggest one anyway
True, but does it need to be the biggest to be valuable?
> not a single native English speaking country in there
Except there are native English-speaking countries in there, and besides, Europeans can very often (region dependent) read/write English anyway.
Also, do you just not want to support non-English content? What about Spanish-speaking Americans?
I also think you'll lose many users in other eurasian countries that use an anonymising network and have exit nodes in the EU.
No, this kind of EU behaviour does, just like China's behaviour.
> True, but does it need to be the biggest to be valuable?
I have lots of users in US, Australia, New Zealand, various Asian countries, and UK. Focusing on them allows me to skip translations.
> europeans can very often (region dependent) read/write english anyway.
As a European myself (Belgian), I know this very well. The Netherlands and Flanders are probably leading in this. But the bigger countries such as Germany, France, Italy and Spain prefer translated software. Just look at the dubbed movies they watch.
It's a losing situation for me anyway, there is no question about that.
Pretty scummy behaviour :(
Only "smaller companies and startups" whose main purpose is "enabling users to upload and share a large amount of copyright-protected content with the purpose of obtaining profit from that activity". That's what the document says; how it will actually be enforced is still a mystery, of course.
P.S. I pointed to that "protection" in a previous comment.
Also that link appears to be dead, so I'm not sure what comment you're referring to.
I've vouched for it, hopefully a few other people with enough karma to do so will and it will resurrect itself.
One of my priorities in the next two years is to protect as much of the decentralised Web from the effects of the Copyright Directive, but it's not going to be easy. The large platforms, in their negotiations with the rightsholders who pushed for this directive, will have the explicit intent to turn it into a moat that can limit the growth of competitors, including non-commercial alternatives.
The rightsholders see even the smallest platform as a lawless environment that has no redeeming features, and worse than the now-regulated giants. Without active and co-ordinated lobbying by decentralised Net advocates, they will paint these alternatives as a "new generation of Pirate Bays", just as they did with YouTube and its predecessors.
So, what happened to that? ^
> Apply the law to platforms that “optimise and promote” significant amounts of user-uploaded works and are not small businesses (turnover below €10M and less than 50 employees)
According to: https://juliareda.eu/2018/10/copyright-trilogue-positions/
Upload filters must be installed by everyone except those services which fit all three of the following extremely narrow criteria:
* Available to the public for less than 3 years
* Annual turnover below €10 million
* Fewer than 5 million unique monthly visitors
The "5 million unique monthly visitors" point is concerning too, because that term is not clearly defined.
The entire attitude that I should "regain [my] freedom" seems condescending. I don't want to use a P2P alternative to YouTube or Reddit, because 99% of the content is on Reddit/YouTube.
I'm well aware that YouTube collects and sells my personal data, I just don't care.
The idea that legislation is good because it forcefully restricts my choices (indirectly, by harming YouTube), thus preventing me from harming myself seems to be a form of unneeded parenting/hand-holding/babying that I'm not a fan of.
That's the Catholic and Lutheran authoritarian mindset that is deeply ingrained in the minds of EU politicians and large parts of Europe itself; that's what they mean by "democracy". They don't really trust people and their individuality.
Just check the backgrounds of the politicians who voted in favor, you'll find that most have this religious background and distrust in people and are easily manipulated by others "higher up the chain", like those cultural snobs in Paris.
That might change once everybody gets forced off Reddit/Youtube. The best-case scenario here is suddenly starting to look like revival of the distributed, non-profit internet in Europe. If that's the case, I can live with losing Youtube.
Wouldn't the killer feature of these P2P platforms (admittedly, none of which I've ever used) be to have a 'transparent bridge' to the mainstream platforms? I.e., like SciHub, almost transparently pirate content from their original source? Do any of them have it?
1) The legislation in question has nothing to do with protecting individuals privacy.
2) The solutions you offer are essentially not productized, they are not usable to normal people.
3) There is absolutely nothing wrong with companies making money.
This legislation is not being driven by Google and Facebook; it's being driven by Der Spiegel, Le Figaro, The Times etc.
It's also being driven by scared EU legislators who think that all their surpluses are going to American companies, it's a very weak hand to play, the 'strong hand' would be to have exceptional firms in Europe, doing things there.
If Google were a German company, this legislation would not exist. Surely German media firms would still want it, but since the surpluses from the situation would remain in the EU, legislators would be less assertive about it, to the point where I think it would fail.
Instead of this legislation, we need:
1) Some tighter privacy rules that actually do affect G and FB
2) Taxation rules for the 21st century - ironically, this is an EU problem, as they have Ireland/Netherlands/Luxembourg as their own loopholes
3) Stronger local entities, particularly in Europe to create a balance, that would lead to less motivation for political interference.
I take issue when people use the word profit to mean some evil, shameful thing. Youtube has amazing content and tools, and I'm sure a lot of their profit is re-invested in the platform. I doubt these other platforms come close in terms of functionality and UX. Peertube site design looks like it's from 2005. I know that might not be indicative of their core features, but first impressions are important, and this does not bode well.
There's a reason mainstream users never flock to these decentralized platforms: they don't have the fit and finish of a commercial venture.
You left out the "by way of selling your personal data, violating your privacy, and having a persuasive (addictive) design in order to glue you to the screen so they can maximize their ad revenue, dismissing any human cost those practices entail" part. I don't associate the word profit with a bad connotation univocally; that's only an assumption on your end.
But yeah, Facebook tries to make Facebook a site you want to visit. Youtube wants you to watch YouTube. Should they try to make sites that aren't engaging?
Maximising ad revenue also seems not terrible for users? A week ago I saw an ad for some pants, and I'm wearing them now. I spent ages walking around town looking for pants I liked. Hopefully next week they start showing me shoes. IMO advertisers and these platforms tend to have incentives pretty closely aligned with their users'.
(Dunno about selling data. I thought that had stopped happening, and I don't like the idea.)
Sure, the internet mammoths of today make their profit that way, but this legislation is probably going to be around for a very long time. Platforms of the future might find other ways to make a profit. (Or they might not, because legislation of this sort makes it much harder for a new platform to rise and challenge the mammoths.)
"Member States shall provide that, in respect of new online content-sharing service providers the services of which have been available to the public in the Union for less than three years and which have an annual turnover below EUR 10 million, calculated in accordance with Commission Recommendation 2003/361/EC, the conditions under the liability regime set out in paragraph 4 are limited to compliance with point (a) of paragraph 4 and to acting expeditiously, upon receiving a sufficiently substantiated notice, to disable access to the notified works or other subject matter or to remove those works or other subject matter from their websites.
Where the average number of monthly unique visitors of such service providers exceeds 5 million, calculated on the basis of the previous calendar year, they shall also demonstrate that they have made best efforts to prevent further uploads of the notified works and other subject matter for which the rightholders have provided relevant and necessary information."
Paragraph 4 says this:
"4. If no authorisation is granted, online content-sharing service providers shall be liable for unauthorised acts of communication to the public, including making available to the public, of copyright-protected works and other subject matter, unless the service providers demonstrate that they have:
(a) made best efforts to obtain an authorisation, and
(b) made, in accordance with high industry standards of professional diligence, best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information; and in any event
(c) acted expeditiously, upon receiving a sufficiently substantiated notice from the rightholders, to disable access to, or to remove from, their websites the notified works or other subject matter, and made best efforts to prevent their future uploads in accordance with point (b)."
Sorry for the wall of text, but I think this is quite illustrative. Anyhow, do you have an example of a small content-sharing service provider that would be affected? I'm sincerely curious. This is a personal opinion, but I don't think any content-sharing platform should profit from copyright infringement; I don't think forums or other kind of communities the main goal of which isn't to profit from that activity would be affected.
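As a reading aid, the tiered regime quoted above can be sketched as a decision function (my own informal reading of the text, not legal advice; the obligation names are shorthand, not the directive's wording):

```python
def obligations(years_available: float, annual_turnover_eur: float,
                monthly_unique_visitors: int) -> list:
    """Rough sketch of which Article 17 obligations apply to a provider."""
    # Full regime unless the provider qualifies as "new" AND small:
    if years_available >= 3 or annual_turnover_eur >= 10_000_000:
        return ["best efforts to license", "upload filtering",
                "notice-and-takedown", "notice-and-staydown"]
    # Reduced regime for new, small providers (paragraph quoted above):
    result = ["best efforts to license", "notice-and-takedown"]
    if monthly_unique_visitors > 5_000_000:
        result.append("notice-and-staydown")
    return result
```

Under this reading, only providers meeting all the thresholds escape the filtering obligation, and crossing any single one of them brings the full regime back.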
That is not a requirement to fall under Article 13! Are you maybe mistaking "copyright-protected material" for "copyright-INFRINGING material"? Every creative text and photo is "copyrighted material", so this covers any for-profit UGC platform.
MEP Reda proposed making the above change in the text, that proposal was rejected. So the broad coverage is intentional.
If those communities are aiming for break-even at best, would they still be considered "for profit", though?
Even an LLC or INC that loses money is a "for profit" company. Most VC funded startups fall into this category where they lose money each year with the goal of eventually turning a profit.
For now, but where will it stop (or will it)? Another commenter pointed out that even small services running ads to pay for hosting could be considered "for-profit". Maybe not now, but it's just a matter of when. First they came for the platforms run by big corporations...
If those small services' main purpose is "enabling users to upload and share a large amount of copyright-protected content with the purpose of obtaining profit from that activity", then they are turning a profit from copyright infringement, whether it is to pay for their hosting or not, so they will be targeted, as the document establishes. That's my take on it, at least, but I think it is quite clear.
Many of the cases out there involved people sharing on a large scale. Examples like The Pirate Bay or Sci-Hub or Aaron Swartz, which involve distribution of large amounts of content to large numbers of people.
The smaller the platform, the less anyone will care about it, even if it is distributing a little bit of copyrighted content. Small scale copyright violation is so widespread, and the benefits of fighting individual cases of it so small, that there's simply no value to taking it on and they aren't bothering.
ISPs will be forced into doing more of this if piracy becomes large scale decentralized, which it will.
Copyright enforcement is about ambulance chasing. Small time channels, like game streamers, who happen to have captured a game that has a music soundtrack, have received DMCA takedown requests.
What we're witnessing here is a misplaced "I hate big tech, so therefore I support anything I perceive as targeting them" resulting in collateral damage that makes every one else's life harder, benefiting mostly rent-seeking big publishers.
The decentralization-will-fix-it cryptoanarchy workaround is a pipe dream. Every so often people imagine an unbreakable piracy distributed darknet will circumvent laws and make piracy safe and convenient for everyone, but the reality is, as soon as it becomes the dominant form, the powers that be will turn their attention to it, and the attempts to crack down on it will be far far more invasive and surveillance heavy.
Just ask Napster, LimeWire, Scour, Kazaa, Grokster, Madster, and eDonkey2000, all of which were brought down by injunctions.
All of those were commercial outfits trying to make money from their proprietary piracy client software; the open source versions are still around, and even very old networks like ed2k are still up and running. The current 'dominant form' is bittorrent, and from what I can tell it is doing just fine.
Left out of this discussion is simply some Chinese company, like Douyin/Tiktok just hosting a Youtube competitor, and hoisting a giant middle finger to the EU. The EU will have to erect their own great firewall to stop it.
And yes, the networks that survived are small, and not making money, which is the correct outcome for a network built on wide-scale abuse of copyright. Your response backs up my point, about how the media goes after large scale infringers, rather than worrying about small-time offenses.
My point is that the over-policing of copyright will cast a chilling effect on independent media creation, that it will affect fair use and transformative works, and that the EU copyright laws will cause all online providers to err on the side of false positives. If you think automated takedowns, de-monetization, and capricious account bans are bad now, just wait until platforms are put in the untenable position of facing either huge fines for under-policing, or lesser punishments for over-policing.
I already told you that distributed networks have been taken down by concerted government action. Torrent sites have been shutdown. People have been charged during the Napster-era for hundreds of thousands of $$$ for songs on their hard drive. Here, how does this back up your point: https://www.theguardian.com/technology/2012/sep/11/minnesota...
As I pointed out, my ISP, Comcast, is already deep-packet scanning network traffic and automatically flagging what it thinks is pirate activity.
You continually confuse real piracy, like someone uploading a whole movie or album, duped from pristine original source -- what I'd call bootleg copies, with stuff like a kid uploading a dance video to a backing track and going viral. Do you really think someone singing karaoke or dancing to a 30 year old song means that person should have their video taken down?
Even song covers - some girl or guy practicing singing and playing music on their own piano or guitar - get taken down. I think that's absurd, especially for music decades old that was released before the singer was born. Artists being sued for sampling or chord sequences - again, a travesty. I'm a big fan of Kirby Ferguson's _Everything is a Remix_, which points out that some of the biggest complainers about infringement of their work are, in fact, thieves themselves.
If YouTube becomes too hard for Europeans to publish on, because it turns into a hyper-curated nanny state, my point is, people may turn to TikTok, Bilibili, or others which will happily host the same content, but whose government cares little about helping to enforce foreign government ideas about IP. The end result of this law will be that it will be ineffectual in reducing piracy, but will be very effectual in casting a chilling effect on actual indie producers, and make it incredibly hard for competitors to YouTube start up in Europe.
Limit copyright to 14 years, the original duration (28 with renewable). That was the law for the first 180 years of copyright. Given the hyper-speed of internet time, if anything, copyright duration should be SHORTER not the century long disaster it is now. If you limited copyright to a much shorter term, I might be convinced to buy into your overly restrictionist stance, but as it is, lifetime+ copyright + orwellian enforcement mechanisms is a bridge too far.
Also, you do realize that most of the people concerned about losing the most money to copyright infringement are big international media companies and guilds, like Disney, or MPAA, RIAA, GEMA, etc and that you're essentially defending Disney Corp's right, whether you realize it or not, to block Star Wars parodies and fan films. Or GEMA's right to block your daughter's violin cover. Unless you think algorithm filters are going to magically determine 'fair use' between a kid cover, or a fan film, and a true bootleg, the end result is going to be platforms over-filtering.
I'll also point out that Max Schrems, who strongly campaigns for the GDPR and against Google and Facebook, is actually vehemently against Article 13 and backs my position and the EFF's.
But even if it plays a restrictive tune, what if we used /robots.txt to explicitly declare whether a website or specific content can be freely indexed and linked to?
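For what it's worth, the standard robots.txt vocabulary only governs crawling and indexing; there is no standard directive for linking or snippet rights, so anything of that kind would be a non-standard extension that no crawler honors today. A rough sketch of the idea:

```
# Standard directives: control crawling/indexing only
User-agent: *
Disallow: /paywalled/
Allow: /blog/

# Hypothetical, non-standard directives for snippet/link permissions
# (illustration only -- nothing reads these today):
# Snippet-allowed: /blog/
# Snippet-denied: /paywalled/
```

Even then, robots.txt has always been advisory rather than enforceable, which is part of why a declaration like this wouldn't carry legal weight on its own.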
It smells like an opportunity to reboot the Web in a less centralized fashion.
Try convincing _anyone_ who isn't already on one of those platforms to switch. It's nice on paper to say "don't like it? don't use it", but it's not going to happen.
Well, if that's true, then the big question becomes what counts as for-profit. Do you need to be incorporated? What about a blog that has some ads to pay for server costs? Will Europeans be able to upload to Youtube as long as they turn monetisation off?
If being non-profit is the big way out, then that goes a long way to mitigate the damage from this. Although it still sucks for small content creators who do want to monetize their own creations but lack the resources to create their own platform.
Youtube is the target of this law, and as they earn money with your video, they have to comply with European law if they want to be active in Europe.
Wikipedia probably stands alone as a not-for-profit (as do, incidentally, government-sponsored services - so in the UK, BBC should be fine for any liability, but Sky would be screwed, for instance.)
And then there is the question how "for profit" is defined.
It was always the excuse that 'only the big bad capitalists' will be hurt by this, but it's simply not the case and has always been a false premise.
26 March 2019. The day the Internet died.
(at least in Europe)
But of course -- "the Internet interprets censorship as damage and routes around it" -- so what we're likely to see is a massive spike in people streaming video over encrypted tunnels into other countries.
That'd be interesting. It'd render GeoIP rather moot, among other things. I suspect the EU and Member States' response would be either "VPNs are banned" or "no service catering for EU users may talk through a VPN endpoint".
This is a law that works to mainly serve the big copyright holders, and in a second degree, impacts the big tech multinationals (=read US companies) less than the smaller ones.
It makes no sense at all. Especially since all member states will have their own law. "Does our filter comply with Belgian law? Also with Luxembourg's? And what about Slovenia's?"
It's a big farce, that can only be approved by total morons that don't even bother to listen to people who actually know what they're talking about.
They see overcomplexity not as a problem, but as a source of pride and a major bragging point. It is actually a massive clash of cultures even though they come from the same place as the people they are trying to govern.
The proportionality requirement in the text of Art. 13 is more onerous to larger corporations. If you're a tiny blog with a banner ad or two, you're not getting slapped off the internet for having a comments field, because it isn't proportional to require cost and complexity increases of multiple orders of magnitude to police your comments section. Unless someone comes up with Compliance.ly & Co. which does the work for you at a price-point that is reasonable, in which case we've just opened up a new industry which hopefully results in Content ID going the way of the Dodo.
After some litigation occurs in which the boundaries of proportionality are set, we'll be in a better position to analyze the impact of this law.
Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?
A successful content-filtering-as-a-service (Compliance.ly & Co. in your example), assuming it gets adopted by all major websites, seems like it would shift the problem to an even bigger gatekeeper than YouTube. How is this a good thing?
In 2013/2014 Ministry of Sound sued Spotify over not removing playlists based on Ministry compilations, created by Spotify’s users. Ministry claimed that its compilations qualified for copyright protection due to the selection and arrangement involved.  
 - https://www.theguardian.com/technology/2014/feb/27/spotify-m...
 - https://www.theguardian.com/technology/2013/sep/04/ministry-...
Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.
The articles, as written, are interesting because they already mention a ton of the balancing considerations. All of those are completely absent in these conversations.
Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.
And that's how the internet will die.
>Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?
If the competitive landscape was the same? Yes. In fact, Spotify's arc is exactly what this law is attempting to encourage. As they grew, they became a quasi licensing clearinghouse instead of another Napster or Limewire. That's the entire point.
>how is this a good thing?
Because you don't end up with 1 compliance service, and you can litigate against the compliance service if they're inappropriately killing your content creation business. As it stands now, if you try to fight YouTube or the content delivery pipeline itself on the basis of their filters, you die. That's not necessarily the case if there's a healthy competitive filter ecosystem. Whether or not we get to that point is another question, though.
The problem is the proportionality requirements are poorly designed. It would be one thing if requirements increased solely with revenue, but increasing with time or user count is purely destructive.
Plenty of small services will hit the time limit before they're big, and then the costs destroy them before they have a chance to be. And the fact that that's likely to happen will keep many people from even trying to begin with.
And user count doesn't mean anything if the profit per user is low. Many side projects have a million users, that doesn't mean it's making any money that could be used to spend on filters -- many of them are lucky to even pay for all of their own hosting costs.
> Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.
That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.
I don't think this is the issue. The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.
Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.
The mental calculus I see here just doesn't take into account how courts work.
>That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.
I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.
Justice doesn't scale linearly, which is a very, very big problem -- but not one that's unique to the Art 11/13 debate.
But that's part of the problem. It means a service you operate today is subject to a law that will be decided on tomorrow. So you either make the conservative choice, which is onerously expensive and may put you out of business immediately, or you risk being the case of first impression where the more cost effective choice you made is decided to be insufficient, and that too puts you out of business -- but only after you've dedicated years of your life to it.
> Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.
Users don't scale linearly either. Things have network effects. Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.
And again, just because you have a lot of users doesn't mean you make a lot of money. Your project may have had a million users for a decade, but if the revenue from those users is only just covering your hosting costs as it is, now you're out of business.
> I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.
Which means that it isn't, because then nobody does that and there is no penalty for continuing to do it in practice. And the solution to that is quite straight forward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.
Moreover, even the existing penalties are quite useless because the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce.
No, it isn't. Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.
>Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.
Yes, and then 95% of those go back down to pre-spike levels of interest. If they're the odd exception with a massive sustained uptick for a service that promoted copyright-protected works, now they can think about licensing and formalizing their processes to protect all stakeholders, now that they're a success.
Just because Napster was once small doesn't mean their business model was going to be exempt from attention forever.
> And the solution to that is quite straight forward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.
That's not simple. Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.
We like to believe there's no Kolmogorov complexity associated with getting justice, but getting justice requires translating reality into consensus at some level of fidelity. That process is EXPENSIVE.
>the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce
Maybe on Youtube that's the case, but that's more of an issue with us having a system of private algorithmic arbitration, which is a separate issue. The courts are too expensive to follow up on individual claims, and the only alternative is for content holders to sue YouTube for big $$$ through content collectives (the threat of which is why we are where we are).
That is separate from the problem that the "new law" created by the court is being imposed ex post facto on actions you've already taken.
It means you don't know what the law actually is yet when you're trying to comply with it. That kind of uncertainty leads people to make overly conservative choices that make beneficial projects uneconomical, or just causes them to give up because it's not worth investing years of your life in something you don't know the courts won't unexpectedly blow apart.
And if you want someone to take input from the EFF et al then why should we wait until it's already in court instead of doing that in the legislature before passing a bad law to begin with?
> Yes, and then 95% of those go back down to pre-spike levels of interest.
But the fact that they did have a million users for twelve months may get them hauled into court.
> If they's the odd exception which has a massive sustained uptick for their service which promoted copyright protected works, now they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.
Again, you're assuming that success comes with popularity. If you're losing money on every user you can't make it up on volume.
There are projects operated by individuals with a large number of users that operate at a net loss. If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.
And the projects that actually are successful would have high revenue, so the only projects ensnared by a user count limit but not a revenue limit are the ones that are barely making it as it is.
> Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.
Yes, exactly, so if that process is used then the penalty would need to be sufficient to justify the victim in going through that process.
But now let me ask you this. How is it that we're willing to impose a prior restraint without going through that process but not a penalty for false claims?
Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.
Yes, it sucks, but this is business as normal. The tension between certainty and flexibility in the law is a longstanding one.
You want these elements decided at the court level because these elements change, and legislation needs to be good law for a looooong time, whereas a shitty ruling can be blown up in months (sometimes in days).
>But the fact that they did have a million users for twelve months may get them hauled into court.
If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.
> If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.
Why would they need to implement Content ID...? That's the nuclear option in the field.
Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material? It doesn't.
The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.
And court decisions that make major changes like that are rare, exactly because they result in widespread burdensome changes to existing behavior that would have been less burdensome if what was required had been better specified to begin with.
If you pass a law that requires such a court decision to happen before anybody knows how to comply with the law, what is anyone supposed to do in the meantime?
Especially when many of the questions are obvious, not bothering to answer them is just punting because they know the answers will be problematic.
> If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.
Everything with user generated content is "a platform that shares and promotes other people's copyrighted works" and they're intended to be licensed from the user/creator. That the platform has no good way to know when what the user uploads is unlicensed is the whole problem.
And if they didn't have some way to do that when they were small then they don't have it when they first become big either. If you need a solution before you have a million users then you need a solution before you have a million users -- and then we're imposing the same burden on the little guy as on Google, if the little guy ever hopes to become Google without promptly getting sued into the ground.
I also reiterate that user count is unrelated to resource level. An individual can operate a platform with a million users and make no profit from it, but impose a laborious content filtering requirement and that platform is gone.
That is presumably the sort of thing they're trying to protect with language about non-profits, but this is where the ambiguity bites us again. If an individual operates a forum as a labor of love where the ads break even with the hosting costs, is that non-profit or not? What if some years there is a "profit" of $200/year? An individual who doesn't want to be bankrupted by lawsuits is not going to enjoy rolling the dice there.
> Why would they need to implement Content ID...?
We don't know what they would need.
> Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material?
Are blog comments not copyrighted material?
How is the platform supposed to know what is being shared there without reading it all?
> The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.
The objective of DMCA 1201 wasn't to keep farmers from repairing their tractors.
The issue is the divergence between their stated objective and what they did.
In practice, it will all be up to the judge:
1. Was your AI filter adequate to properly filter the content?
2. If not, how high should the fine be?
There is 1 easy solution to all of this: incorporate outside of the EU.
1b. Regardless of (1), can you prove you made "best efforts" to acquire licenses for the content that was later found on your platform.
It's not specified who you should be seeking deals with, how you're supposed to know ahead of time what a user will upload, how you're supposed to identify the true rightsholders of an uploaded work, etc.
That criterion must even be fulfilled when you're less than 3 years old, by the way!
That's the case for any piece of legislation.
The test isn't 'if your AI was good enough'. For the majority of people the most important part is: 'is it proportional to even use AI at your size?'
To which the answer is no.
If you're running a stream or youtube channel of self-created content, the cost of moving dramatically exceeds the total cost of legal risk you're eating in staying put.
How does the EU legislation change how that works? It already exists.
Edit: Content ID already covers the requirements of Art. 13 under any reasonable reading of the legislation. Things aren't going to get worse because of the legislation. They'll get worse because of pressure from their content partners and because they refuse to spend on human support. Why spend when you can do nothing instead?
Your speculation doesn't make legal or business sense.
But hey, if you are outside of the EU, no problem. So guess what streamers will do.
This is not rocket science you know. This is just simple cause and consequence.
Stricter filters for EU citizens. And hey, maybe if we are lucky, YouTube decides EU isn't worth the effort anymore and decide to use the block filter.
The concern over data-use at filtering service companies is new to me and interesting but substantially mitigated if they are compliant with GDPR. I haven't seen this argument before, so I'll have to take a look. Thanks!
I'm sure everyone is dreaming of having a "tiny blog"</irony>
Meanwhile in the real world, the European streamers and content creators, who make a living from their content, are looking on how to escape the EU so their content doesn't get filtered out.
I did. I've followed every public draft of the language as its developed.
The article does not do what people are claiming it does. The internet is not dead. Small content creators are not being wiped out. The big tech giants are not creating yet another regulatory moat.
There are plenty of real problems with Article 13 that deserve discussion and elaboration so that when the first cases come out, they get decided properly, but this isn't a nuclear bomb that blows up the net and makes it a corporate-only zone.
You clearly didn't.
From the text itself: "for less than three years and which have an annual turnover below EUR 10 million"
Do you see the "and" there? This means that ANY business that is older than 3 years NEEDS to comply with filters.
I read the text, because it directly impacts my platform. The solution is: start a foreign corporation.
Your comments here, and in your other posts where you think that streamers have "legal" problems, clearly indicate that you have completely no clue what you are talking about.
Small content creators will be filtered out, and small platforms will need to comply to all the different laws of each EU country. This is crazy.
I did. I wrote at length about it in the previous thread, and provided links to the language of the articles as well as the elements that were ignored.
You need to read ALL of the language to understand how the proportionality requirement impacts the scope delimitation requirement you're listing.
If you don't do that, you end up with a broken understanding of how the gears fit together.
The legislation does have holes in it, but they aren't that 'small content creators will be filtered out'. People aren't going to litigate against small content creators in the first place. They're going to get smacked by Content ID, which is already ruining livelihoods, but which is a completely separate issue from the EU legislation.
It's about the implications, how it relates to the status quo online, and how the digital economy works. What they're trying to enforce is just irrational and goes against the natural flow of things. They're nuts.
This video sums it up nicely: https://www.youtube.com/watch?v=t7tA3NNKF0Q
I think this sort of reasoning is largely fallacious. Just because people view your stuff doesn't mean that if you're successful in locking it down that they'll then pay to view it.
I feel the media companies know this and that's one reason they demand ever increasing copyright terms - to avoid older content eating in to current profits.
And by definition this can be seen as a loss, since the viewing itself is the revenue generator.
I haven't seen any support for the articles which actually shows the effects of the policy will be good, rather than arguments saying "it's meant to be good". Which is a fallacy that affects many policies that later end up having adverse effects.
But ultimately bureaucrats are happy whenever there is an excuse to increase bureaucratic power.
For the particular point you're putting out, to justify the EU policy you have to at least show 1) that those media outlets would receive all the traffic those FB posts generated if the FB posts didn't exist in the first place, and 2) that this outweighs the costs of abuse of that policy (claims over fair use, e.g. the YouTube copyright system) and of content that simply will not get reshared, even if it's fair use and links to the source material, out of fear of triggering the safeguard mechanisms.
I was just trying to put in perspective WHY the politicians feel the need to do this. It's mostly backlash against Facebook for years of content stealing.
Youtube and its Content ID system are actually what this law wants to introduce everywhere. While not perfect, it's still better than Facebook, which seems to be lawless on copyright.
In fact, it's all about the music industry wanting higher licensing payments from YouTube: At least as much per play as e.g. Apple Music pays. They call the fact that they're not getting that today the "value gap" – THAT'S the undisputed reason/justification for this law (just google the term).
(Facebook, by the way, also has a content filter: https://www.facebook.com/help/publisher/330407020882707)
It's also why China has such lax IP laws. They are more of a manufacturing powerhouse than an IP powerhouse ( for now at least ) so they have little to gain with stringent IP laws. When their IP portfolio increases, you can bet that their government would be all about IP protection.
And going back even further, we had some of the laxest IP laws in the western world during the 1800s because we had so little IP to protect. Which allowed our businesses to take a ton of IP from IP-rich britain and europe.
It's greed and selfishness.
Google could have stopped all this by immediately kicking all European newspapers off every Google service they have, and reinstating them only after they filled out and submitted a form allowing Google to use their content without any pay.
Instead, Google only threatened to do this, and European newspapers thought they had some power.
The only power they have is making and breaking european politicians, hence current mess.
They did exactly that when Germany introduced the "Leistungsschutzrecht", which was pushed and lobbied for by all major German publishers. Needless to say, they all agreed to offer their snippets for free when Google presented them with their options.
We are no content powerhouse precisely because we are so concerned about all these bureaucratic things. Instead of just distributing better content more efficiently, we prefer to make it illegal to be better than the status quo.
Can't speak for all of Europe, but Internet-related legislation here in Germany has been a disaster since the mid-1990s.
Germany also probably has the strongest tech industry in Europe. Or at least as strong as France and the UK, it seems.
It's election year. Since the big publishers are all for the reform, any politicians opposing it must fear for bad press.
Is the way this legislation got through because of nasty lobbying? What if it was brought in to stem the tide of American tech companies destroying more European businesses by hiding taxes and dodging copyrights.
>Europe isn't exactly a content powerhouse.
Europe has plenty of 'content powerhouse' companies. They just don't wear their nationality on their chest when they sell to the US.
Strengthening privacy protection makes the most popular model for sites to pay for content creation and operating costs--selling information about their visitors to advertisers--much less effective.
Maybe as part of that they want to make it more viable for sites to switch to a direct selling of content model?
1: prevent any free news websites from linking to their pay-walled website and paraphrasing/quoting the whole thing (most people won't pay for the original news source when they can read practically the same thing from a free website). Article 11 prevents this without compensation to the original source.
2: prevent any single user who has access to the pay-walled website from posting the entire article onto websites like hacker news and reddit which I see happen all the time (that and outline/archive links). Article 13 prevents this with automated filters that if fail, the news website can just sue the website and get compensated that way.
This law's intent is to prevent unlicensed content from being available to European consumers. That will probably mostly work, with the usual caveats and unintended consequences we all know about here on HN.
A friendly reminder that this was said about TCP/IP. It does not apply to the application layer (WWW), neither in theory nor in practice.
No. The day the "Upload Other People's Work" Internet died.
> what we're likely to see is a massive spike in people streaming video over encrypted tunnels
Or just creating their own content. Wouldn't that be awesome?
It will either be the end of any kind of user participation on the European internet, or everything that happens has to pass through Google's filter. Neither are good options for internet freedom.
Note that Google's Youtube filter already has a tendency to block people's own content when it resembles content of the big copyright holders. For example: someone playing a piece from Bach on the piano when Sony has also released a recording of that piece from Bach. Youtube will flag that, Sony is fine with that, and small content creators don't have the resources to fight it.
That situation will get a lot worse.
"Sorry, the video you uploaded 'Me playing Beethoven on the piano' contains BEETHOVEN'S 5TH SYMPHONY by BMG-EMI-XYZ Music Corp. You cannot upload this video."
Measures like this only serve Big Content. And badly, in my opinion.
There is a consequence for failing to honour substantiated ones.
I've never, ever heard of a single charge being filed under that clause -- but I've heard of tons of instances of DMCA being abused. On this statement, I'd love to be proven wrong!
So basically useless. They claim to be acting "on behalf of the owner of an exclusive right that is allegedly infringed" and they are, even though the allegation is completely without merit.
Reminds me of the Dropbox launch thread here on HN a decade ago where some sysadmin chimes in with "but this is so easy for the layman to do themselves with FTP and [other technologies laypeople have never heard of]" (not an actual quote).
The blogosphere was similar to that, before everyone gave up and went to Facebook.
This law does nothing to change that in any case. Get a (US law) DMCA takedown, and ignore it, job done.
Now that Americans are realising that other countries exist and make laws like the DMCA, maybe they'll stop doing it.
Hasn't stopped big companies from making false claims before. After all they are the ones responding (and likely rejecting) the appeal of the uploader. See: https://arstechnica.com/tech-policy/2018/09/sorry-sony-music...
> And we know that detecting that certain recording via music matching does not work, only checking the strong hash of it would work. Which would be trivial to circumvent by a single bit-flip.
So you're saying that even Google hasn't made upload filters work reliably? Who can if not the company behind Youtube?
They would need to match all EU copyrighted work, and there's not even a database of EU copyrighted works, because our copyright law works differently than in the US. There's no exact OCR or proper fuzzy matching of video or audio possible -- maybe with success rates of 60%. This is too risky for a big content provider, especially when dealing with an entity that has no idea what it's talking about (the EU parliament).
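The quoted point about exact hashing is easy to demonstrate: a cryptographic hash changes completely when even one bit of the input flips, so a filter that matches uploads by exact hash is trivially circumvented. A minimal Python sketch with made-up bytes standing in for an uploaded file:

```python
import hashlib

# Pretend this is the raw bytes of an uploaded recording.
original = b"some copyrighted audio bytes..."

# Flip a single bit in the last byte -- imperceptible in audio,
# but a completely different input to the hash function.
tampered = original[:-1] + bytes([original[-1] ^ 0x01])

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

# The digests share no useful similarity, so an exact-hash filter
# treats the near-identical copy as an unrelated file.
print(h1 == h2)  # False
```

This is exactly why real filters fall back on fuzzy perceptual matching, which then produces the false positives (the Bach/Beethoven cases above) that the thread complains about.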
Yes it's true XYZ Music Corp would only own that performance (as it's Beethoven and the piece is long out of copyright). The problem is, the automatic filter is a fuzzy matcher: it compares the upload against every other performance of Beethoven's 5th it's been programmed to recognise.
Let's say our uploader has been learning from one of those performances. Their performance will sound very similar to another pianist's -- at least to the fuzzy-matcher.
And therein lies the problem: the uploader's piece is clearly copyright to them, but the magic upload filter can't tell the difference.
It's like uploading a silent theatre production (let's say some kind of homage to silent films) and the upload being flagged for violating the copyright in 4'33".
> Article 17/9: Where rightholders request to have access to their specific works or other subject matter disabled or those works or other subject matter removed, they shall duly justify the reasons for their requests. Complaints submitted under the mechanism provided for in the first subparagraph shall be processed without undue delay, and decisions to disable access to or remove uploaded content shall be subject to human review. Member States shall also ensure that out-of-court redress mechanisms are available for the settlement of disputes. Such mechanisms shall enable disputes to be settled impartially and shall not deprive the user of the legal protection afforded by national law, without prejudice to the rights of users to have recourse to efficient judicial remedies. In particular, Member States shall ensure that users have access to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright and related rights.
So this also encourages appealing in court against the current, very opaque content-removal policies. It's certainly not a perfect remedy, but it is progress compared to the situation today, where you can be arbitrarily banned and platforms just act as they see fit.
> Article 17/7: The cooperation between online content-sharing service providers and rightholders shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation.
So overblocking will be costly as well, if enough suitable laws are signed into effect and people start complaining. And this really puts large-scale commercial sites (remember, non-profits are exempt) in a tough spot: they either share revenue with content creators/their organisations (which are mostly s*, but could be changed...) or they employ even more moderators (remember the small paragraph where banning is to be done by humans ;)) -- which all severely limits the current exploitation of the internet as a big chunk of empty space, where the strongest strongman grabs the biggest slice and employs an army of user-slaves.
> Article 17/10: For the purpose of the stakeholder dialogues, users' organisations shall have access to adequate information from online content-sharing service providers on the functioning of their practices with regard to paragraph 4.
I guess already today a lot of people would like to know how Content ID blocks their content, but Google can't and won't say (because it would expose their dirty secrets...).
=> IMO: all in all, for the average person, the internet might develop back to where it was 20 years ago, with select content providers and quite a large proportion of actual people hosting fun stuff (and moderating their own boards...). If people are as IT-literate as they claim to be (although I doubt that for a large percentage of the Fortnite-playing #saveyourinternet crowd), we may well enter a real golden age of the internet.
You call your lawyer and ask them to sue (as an example) Google.
I expect the response would be something to the effect of "are you mad, rich or both? Because this is going to take a long time and be very expensive."
Just because you could doesn't mean it's feasible from a financial point of view.
A real problem would be the usually long wait.
However, taking into account several more circumstances, either side might not be keen on a court case, and thus prefer to avoid it. That hinges on morals and technical details.
The problem with copyright's blurry edges around the originality threshold hasn't changed, at least. The Olympics organisation is famous for suing, and losing often enough, over its trademarks, for example.
> take on a major media company in court
In court or outside? And why the media companies? Laws can be struck down by supreme courts on constitutional grounds. That's an even bigger judicial hurdle to consider. If lobbying or legislative orders are involved, it would be a superset of the problem, as the court is to an extent bound by the lawgiver's interpretation of the law, disregarding any side effects that are implementation-specific. That's the undefined behaviour of the law. The service nulled all your bits after you passed ownership? The content wasn't registered initially and you assumed it was licensed to null? Ohohoho, none of those side effects were mandated.
So yes, it's still an invalid flag, but if you want your video up again, you have to sue somebody who is probably in another country.
Therefore every platform provider visible in the EU (like Wikipedia, Facebook, YouTube, every blog, newspaper comment sections, ...) needs to stop accepting user content, because they cannot guarantee that copyright violations will not occur. Violations cannot all be filtered or detected -- think e.g. of song lyrics in images. Will you OCR every image for a protected work? There does exist a foolproof method to bypass AI; it's called a captcha. Even if you install comment or upload submission queues with manual labour ("manual filtering"), you cannot guarantee the absence of copyright violations; only courts can determine infringement.
The politicians might have thought of a GEMA-like index storing hashes of protected content in some form or another, which could be distributed to certain content providers, but that doesn't change the law itself, which is much broader and impossible to fulfil. Thus Web 2.0 is dead.
If I were Facebook, I would rather ignore these new laws and go to court over it. The existing framework is good enough, and the best way to handle copyright violations.
Even better still: there's a song which consists of 4 minutes and 33 seconds of silence. That's it - silence.
"Your latest video upload contains 5 seconds of stunned silence, which has been identified as an extract of 4'33". This extract is copyrighted. Your video has been deleted."
Just having a urinal doesn't infringe on Duchamp's "Fountain", not even if it's the same model, only if it is presented as artwork does it become a copy of Duchamp's "work".
True, for 4'33'' there is a simple rule that they probably follow - ignore silence :). But for Fountain (if it ever came up) it's hard to imagine that the difference between a protected copy and a non-protected similar image could really be automatically discovered.
But the filter doesn't know about context, it just correlates two images... and you get "Comparison with copyrighted work 'Fountain', 75% match".
75% > 0%, so the filter says "non".
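The false-positive mechanism described above can be sketched in a few lines. This is a toy model, not any real Content ID system: the "fingerprints" are invented numeric vectors, and the 0.7 threshold is an assumption standing in for whatever cutoff a real filter uses. The point is only that an independent performance of a public-domain piece naturally resembles a catalogued recording of it, so a similarity-threshold filter blocks it:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two fingerprints in [0, 1] for non-negative vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def upload_filter(upload_fp, catalogue, threshold=0.7):
    """Block the upload if it resembles ANY catalogued work above the threshold."""
    for title, ref_fp in catalogue.items():
        if cosine_similarity(upload_fp, ref_fp) >= threshold:
            return f"blocked: {title}"
    return "allowed"

# Hypothetical fingerprints: a licensed recording in the catalogue, and an
# independent performance of the same public-domain piece.
catalogue = {"Beethoven's 5th (XYZ Music Corp recording)": [0.9, 0.8, 0.7, 0.95]}
my_own_performance = [0.85, 0.75, 0.72, 0.9]

print(upload_filter(my_own_performance, catalogue))  # blocked, despite being original
```

The filter has no input for "this is my own recording of an out-of-copyright work"; it only sees that the similarity exceeds the threshold.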
> decisions to disable access to or remove uploaded content shall be subject to human review
Isn't that intended to handle those cases? I'm not saying that it will be adequate.
People on YouTube are creating content, lots of it. Will they still create it when some filter keeps blocking them?
If you want pirated stuff, just download torrents. They won't disappear with this new law.
The only thing that will appear are filters.
- use of unlicensed samples in music. Goodbye, Soundcloud rap and EDM music scenes!
- use of images and video clips in memes. Goodbye, Tumblr and Reddit!
Until Disney/Comcast/Weyland-Yutani decides that they own your original content. Or the content-ID'ing algorithm generates a false positive. Just think a little bit about how all of this will be implemented.
Will they analyze each video to determine whether it is legal or illegal, checking everything... or just implement a simple, fast and cheap filter that blocks most of the content, with no way to appeal the ruling, just like YouTube does now...
That way they'll have to shamefully roll back this law, and we can be sure they won't try to pull off such a farce again in the (near) future.