Here is my stack:
1. NZB downloader - NZBGet - http://nzbget.net/
2. TV Shows - Sonarr - https://sonarr.tv/
3. Films - CouchPotato - https://couchpota.to/
4. NZB Search - NZBHydra - https://github.com/theotherp/nzbhydra (I contribute to this project)
5. Plex - https://www.plex.tv/
Add Torrent support:
6. Client - rtorrent - https://rakshasa.github.io/rtorrent/
7. Web Interface - ruTorrent (warning: PHP) - https://github.com/Novik/ruTorrent
8. Proxy private trackers to XML-RPC - Jackett - https://github.com/Jackett/Jackett
9. Cloud Torrent - web interface - https://github.com/jpillora/cloud-torrent
10. Plex stats, analytics and user management - PlexPy - https://github.com/JonnyWong16/plexpy
11. Plex request - allow users to request content - https://github.com/ngovil21/PlexRequestChannel.bundle
Hardware is a pair of HP ProLiant Gen8 MicroServers running Ubuntu 14, Docker, nginx, and Let's Encrypt. There's no really easy way to set all this up; you have to do each part of the stack yourself (a docker-compose file would go a long way toward simplifying it).
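For anyone wanting to reproduce this, a docker-compose sketch of a few core pieces might look something like the following. The image names, ports, and paths are assumptions for illustration (the linuxserver.io images are one popular option), not a description of my actual setup:

```yaml
version: "2"
services:
  nzbget:
    image: linuxserver/nzbget      # example image, adjust to taste
    ports:
      - "6789:6789"
    volumes:
      - ./config/nzbget:/config
      - ./downloads:/downloads
  sonarr:
    image: linuxserver/sonarr
    ports:
      - "8989:8989"
    volumes:
      - ./config/sonarr:/config
      - ./downloads:/downloads
      - ./tv:/tv
  plex:
    image: linuxserver/plex
    network_mode: host             # Plex discovery works best on the host network
    volumes:
      - ./config/plex:/config
      - ./tv:/data/tv
```

Each service would still need its own first-run configuration, but `docker-compose up -d` gets the containers and shared volumes in place in one step.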
I spend over $200 a month on content subscriptions, so I don't feel bad about utilizing the convenience of NZB downloads + Plex.
I would highly recommend checking out Radarr (https://github.com/Radarr/Radarr) to replace CouchPotato as I find CouchPotato a very inconsistent experience, and sometimes I am outright confused as to what it is doing (seemingly not snatching the movies I want it to).
Radarr is actually a fork of Sonarr, so you can expect a pretty good experience right from the start, and development has been very active since it was announced around a month ago. The project being only a month old is the one caveat: I am still uncovering minor bugs, but the devs are quick to fix them.
What do you spend $200/month on?
I thought I was doing well running flexget on Ubuntu on a crappy old netbook.
HTPC manager might be worth a look too.
Setup guide: http://www.htpcbeginner.com/blueprint-flexget-setup-guide/
Unfortunately you get mixed results in terms of how things are labeled on public trackers and I haven't had a private tracker account since what.cd died.
I'll have to try out a few of these torrent search tools and see how they work out. Most private trackers didn't want to just take cash and seemed to want you to jump through hoops instead, so unless I can buy accounts some other way I'll probably just stick to public ones.
I think most private trackers are trying to avoid developing a reputation like that of IPTorrents, which is notorious for allowing people to "donate" for accounts.
Most private trackers also explicitly forbid members from selling invites for money, and, more recently, giving them to strangers.
The higher-end trackers are communities, and they typically want people who'll fit in. One way to do this (and the one that's employed) is to only let people in if they're vouched for by someone who has already proven themselves (an invite from an existing member), or if they've proven themselves on another tracker (an invite from the tracker itself, through the invite forum on the other one).
Lowering the barrier to entry by letting people simply buy their way in results in the tracker as a whole being degraded. This can be seen through the recent replacements for What.CD. One of the new trackers chose to heavily limit membership and focus on a small group of prolific uploaders that had proven themselves somehow while the other chose to have pretty much open signups. The pickier tracker, despite far more stringent entry requirements and a tiny fraction of the user count, currently has the larger catalogue and higher ratio of FLAC.
Also, my limited experience with pay-to-enter trackers has been that they're pretty much on par with public trackers. Perhaps even lower quality, definitely not close to the What.CDs of the world.
I'm just looking for more 2160p really, but so far no trackers I've found, public or private, have satisfied that. Usenet has actually given the best results so far.
Unfortunately, community is what leads to content. There's an insane amount of stuff that's available on private torrent trackers that hasn't ever touched Usenet and the kinds of people with access to it aren't the kinds of people to put it on Usenet.
Having to click into each link, then in again to the stream, then load it up, then back out, and repeat that whole process for the next episode. That's just one example, but there are lots of things like that.
I rewrote this after an initial version that used Redis as a queue to process. Asking end users to install Redis was a step too far I think, so I stepped back and thought about how to solve this without external dependencies.
I ended up with a simple Elixir and Erlang queue implementation. It's much simpler, just as fast, and has no freaky deps. GenServer starts up a worker for each crawler, and each worker schedules its own work. It's really strange to use a language so complete; it kind of feels like cheating. Programming in easy mode.
Pull requests very welcome to the crawler folder, it's super easy to write a crawler for your favorite site!
It'll make you a ~100MB Linux distro that will boot straight into your app in ~10 seconds on a Raspberry Pi.
There's a hex.pm package called "Amnesia" that wraps Mnesia in a more Elixiry format.
These sites have sitemaps:
Enough with the hype. "High performance"? What??
Your program is not concurrent: you have one process per website and go through URLs one at a time.
You don't even use Bloom filters.
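For anyone unfamiliar with the criticism: a Bloom filter lets a crawler answer "have I already seen this URL?" in constant space, at the cost of occasional false positives (but never false negatives). A minimal stdlib-only sketch — the bit-array size and hash count here are arbitrary choices for illustration, not tuned values:

```python
import hashlib

class BloomFilter:
    """Space-efficient approximate set for URL deduplication.
    Membership tests may rarely return false positives,
    but never false negatives."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive k independent bit positions by salting one hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

A crawler would check `url in seen` before enqueueing, and `seen.add(url)` after fetching; with a million bits and four hashes, false positives are negligible until the filter holds tens of thousands of URLs.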
In this scenario, the source of truth is the location where your users are acquiring their .torrent files.
>For the present purpose that may not be ideal because this system wouldn't be decentralized, and one institution (or a small set of institutions) would have all control. Ideally, there would be some consensus mechanism that determines which torrents are bad and shouldn't be distributed.
Trackers are the centralized alternative to using the Mainline BitTorrent DHT. The performance advantage is significant enough that they're still in common use even now that DHT support is ubiquitous in client software. You could build a tracker that serves peer data it has cached from crawling the DHT, which is an idea I don't think has been tried yet. As for a consensus mechanism, that should probably be done out of band, and client software could be configured to query it and react appropriately.
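The core of the DHT-caching tracker idea could be as simple as a TTL'd map from infohash to recently seen peers. A toy sketch — the class shape, method names, and 30-minute TTL are all made up for illustration:

```python
import time

class PeerCache:
    """Per-infohash cache of (ip, port) peers, as a tracker
    crawling the DHT might keep. Stale peers age out after a TTL."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._store = {}  # infohash -> {(ip, port): last_seen_timestamp}

    def record(self, infohash, ip, port, now=None):
        # Called whenever a DHT crawl observes a peer in the swarm.
        now = now if now is not None else time.time()
        self._store.setdefault(infohash, {})[(ip, port)] = now

    def peers(self, infohash, now=None):
        # Called on announce: return only peers seen within the TTL,
        # and drop the rest from the cache while we're here.
        now = now if now is not None else time.time()
        swarm = self._store.get(infohash, {})
        live = {p: t for p, t in swarm.items() if now - t < self.ttl}
        self._store[infohash] = live
        return sorted(live)
```

The real work would be the DHT crawler feeding `record()`; the tracker side then answers announces from the cache instead of (or in addition to) its own swarm state.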
>Also, what happens when the key for signing torrents is stolen? In the case of a company using such a system, that situation can be handled but in a more decentralized system with an emphasis on long-term storage the loss of a key may be a bigger problem.
With the JWT system you can rotate keys, but doing so will break clients announcing with the old key: users would have to manually update all their torrents to participate in the swarm again. That's a shame, but there is no official signing scheme in BitTorrent. If the JWT system as chihaya has implemented it gained traction, it could be updated so that clients regularly fetch JWTs from an auth endpoint; then rotation could be handled the same way most OIDC implementations handle it.
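To illustrate the rotation idea only — this is a toy HMAC token, not chihaya's actual JWT format — the trick OIDC uses is that each token names the key that signed it (the "kid" header), so old and new keys can overlap during rollover and a retired key simply fails verification:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical key set; during rollover both keys stay valid,
# and retiring a key means deleting it from this map.
KEYS = {"2017-01": b"old-secret", "2017-02": b"new-secret"}

def sign(payload, kid):
    """Produce a token that records which key ('kid') signed it."""
    body = base64.urlsafe_b64encode(
        json.dumps({"kid": kid, **payload}).encode())
    sig = hmac.new(KEYS[kid], body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token):
    """Return the claims if the named key still exists and the
    signature checks out; None means the client must re-auth."""
    body, sig = token.rsplit(".", 1)
    claims = json.loads(base64.urlsafe_b64decode(body))
    key = KEYS.get(claims["kid"])
    if key is None:  # key retired: clients holding old tokens re-fetch
        return None
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return claims if hmac.compare_digest(sig, expected) else None
```

The point is the graceful failure mode: clients that can re-fetch tokens from an auth endpoint survive rotation automatically, while static per-torrent keys (the current situation) force a manual update.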
I can understand languages not self-hosting their compilers, but I'd at least like them to self-host their tooling: bundlers, packers, preprocessors, etc.
A GPL violation is a violation where someone didn't share information.
Torrenting "illegally" is the act of sharing information.
If you're all for sharing and the free flow of information, I don't see how these two conflict from my ethical point of view.
It's a matter of preventing the economic systems that support sound engineers, voice actors, grips, actors, and makeup artists from providing enough of a living to produce art.
Consider what has happened to journalism since smaller papers lost the ability to support professionals.
Their property might be in the form of information, sure.
But one product being more widely distributed doesn't make any ethical difference. The fundamental point is that the information/property has been stolen in both cases.
People break the GPL in order to steal from the public. People break copyright in order to steal for the public.
With material theft, the person who is stolen from loses access to what is stolen. This is true in the case of a GPL violation, but not in the case of classic copyright violation.
You might argue that copyright violation prevents extracting profit - but it doesn't mean the victim loses access to their property.
Edit: I'm not trying to be snarky or overly pedantic, and the list of supported sites does suggest that this might be intended to facilitate piracy. Still, this is interesting and potentially useful technology, and seeing it merely as a tool for copyright infringement misses an important point.
If the content of the sites is the problem, why does Google get a free pass on that point too?
The goal of the application itself, just like Google's, doesn't have anything to do with copyright violation, it just can be used that way.
Edit: Removed an irrelevant addendum.
If the image weren't a suitable representation, then the author wouldn't have chosen it. You're trying to downplay that these sorts of decisions are intentional.
Gun ownership is not a crime. But the gun is shown in a scenario where a crime is about to be committed.
At first I didn't have any issue, but this analogy seems to have convinced me it's a bad image to use.
But I'm sure the maintainer will be happy to modify the picture with a search for "big buck bunny" if you ask nicely.
Some of the people decrying GPL violation also decry this kind of copyright infringement.
But, also, GPL has a clear purpose and limits itself to that purpose.
The traditional rights protection organisations have sought to extend copyright laws in weird ways.
In the UK, "format shifting" (e.g. ripping a CD you bought to MP3) has gone in and out of the law. I think it's currently unlawful to rip a CD that you have bought to MP3.
Backing up my purchased media should not be a copyright infringement, and I have no problem breaking the law to do so.
And why are you bringing up backups? What does that have to do with torrenting?
The industry has lobbied so hard that the goodwill of the younger generation is gone. (It's getting better now with Netflix and offers like this.) But I don't feel that somebody did something wrong when they downloaded music or videos, and I am part of this younger generation that grew up like this.
Instead, when I buy their albums on Bandcamp.com, that particular artist gets 85% of what I'm paying! Also I get to own an actual copy instead of the ridiculous deal streaming offers. Storage is so cheap these days! But don't tell the consumer or we can't sell them streaming services on one end and data bundles on the other.
Unless you listen almost exclusively to modern commercial "pop" music--though calling it "popular" has been a misleading term ever since people have been able to listen to music via channels that the "pop" industry turns a blind eye to. Still it's a valid choice, of course, if you enjoy that music! And honestly quite a lot of "pop" music is pretty good. And I presume that the people who produce Justin Bieber's songs are reasonably well-compensated for their work, but not fairly if you look at where the money is going. So you're not really paying to support the content producers here either, a lot of your money is also being spent on unnecessary infrastructure designed to keep out the other content producers and to shove the music into the ears of people that don't even really enjoy it.
Someone elsewhere in this thread said to look at the news agencies and journalism doing badly and dropping in quality. The battle here is over advertising versus adblockers. Well here's some news for you: They're supposed to be a mirror of society, a big factor is living in denial of the horror that it can't possibly really be that bad. Another one to think about for a bit: All those poor blocked ad networks, who do you think profits most from those ads? The quality journalists or the listicle clickbait writers filling the web with attention-slurping crap?
And at least for the pop-music industry, I can kind of understand why the big labels and rightsholders are very quiet about the unfair distribution and very vocal against anything that challenges it (because they profit very much from the listicle-equivalent of music). But for news agencies I think it's strange, or just plain stupid, to cry about adblockers. People running adblockers hurt the listicle clickbait writers way more than they hurt quality journalists, disproportionately so. It's killing off some dead-weight competition.

On the income side, news agencies, even smaller ones, have sufficient brand identity that they don't need a third-party ad network; they can direct-sell their ad space first-party to an advertiser. Adblockers don't tend to block first-party ads (and if they do, THAT is something I'd be willing to add an exception for). This cuts out the middlemen, who also royally screw the incentives around intrusiveness.

Can you imagine a direct sale of ad space going like: "...and could you please cut up the article into four pages so that people have to click through, so that they load up my ad more often?" Of course not; it hurts both the advertiser and the publisher. But add in a middleman with a sufficiently clever cost-per-impression scheme, and suddenly the incentives are against almost everybody else: publisher, advertiser, and consumer.
TL;DR: The idea of copyright is badly broken, the industry doesn't understand this yet, advertising is not the answer, and the current state of affairs is suboptimal for just about everybody.
"This war must end. It is time we recognize that we can't kill this creativity. We can only criminalize it. We can't stop our kids from using these tools to create, or make them passive. We can only drive it underground, or make them "pirates." And the question we as a society must focus on is whether this is any good. Our kids live in an age of prohibition, where more and more of what seems to them to be ordinary behavior is against the law. They recognize it as against the law. They see themselves as "criminals." They begin to get used to the idea.
That recognition is corrosive. It is corrupting of the very idea of the rule of law. And when we reckon the cost of this corruption, any losses of the content industry pale in comparison."
Possibly of interest: Lessig lost a US Supreme Court case regarding the constitutionality of a copyright extension act, in a split decision, 14 years ago.
He authored the Creative Commons licenses and is currently a possible candidate for the Supreme Court himself. Note that he's not arguing for copyright infringement, but for a reform of copyright law.
If someone truly believed in the moral aims of the GPL then they probably wouldn't pirate media either. That sort of pure and consistent philosophy is an increasingly rare and admirable one.
The FSF believes that all software should be free. The GPL was created to attempt to establish such freedom by leveraging the copyright system. If the copyright system were to be abolished right now worldwide, the GPL would be obsolete, since all software would be free.
Piracy is thus not antagonistic toward the goals of the GPL; rather, it's complementary. Piracy does not reduce access to information, and neither does the GPL; thus, neither are at odds with the other. They're simply different implementations of the same goal: one implementation ignores copyright entirely, while the other interpretation uses copyright against itself.
Richard Stallman and the FSF would encourage you to not engage in restricted media if you don't believe in it. . .not disregard the laws that keep the media restricted.
Piracy and the GPL are NOT implementations of the same goal. The GPL grants a framework for sharing and encourages others to do the same. . . Piracy is availing yourself of something against the wishes of its author.
To say that the GPL and piracy implement the same goal is to suggest that sharing your home with a friend is the same thing as having a stranger squat in your house against your wishes. The sharer and the squatter are not implementors of the same goal.
All earthly art is influenced or derived from previous art. Some is outright copied, then modified to produce something "new". Sometimes this is considered valid, but sometimes it's not. Either way, that is how art (which includes music) works.
But in the US (and other countries with similar corporate influences), copyright is used both to prevent derivative works and to control the consumption of the art.
I could give numerous examples, but with this HN audience I doubt it's necessary. We've all experienced some unreasonable negative related to copyright.
So circumventing copyright puts the ethical control back into our hands. Those who are thieves and freeloaders will always exist, but the rest of us will contribute to supporting the art that we consume - even if we download that art "illegally".
This has nothing to do with the fact that in my country there is very limited access to original content, especially purchased legally. I watch almost everything I can in the cinema and keep torrenting (which is perfectly legal here) to a minimum. However, I think the ultimate solution, just as in other cases (like music), is for distributors to get the hell out of the deal going down between producer and consumer. Most torrenting comes from the inability of distributors to keep up with technology and offer a reasonable distribution mechanism that works globally.
(edit: only messing - I was just thinking that if I wanted to learn Elixir hacking on some existing project is a good way)
Many torrent clients can monitor an RSS feed, so that would probably be the best solution. I would add the ability to create a user account, and each user could set up a few RSS feeds. In my case, I would like to have one RSS feed with the label "Movies", and one for "TV Shows". Then the main search page would have buttons to add the magnet link to either RSS feed.
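Generating such a per-user feed is straightforward: an RSS 2.0 document with the magnet URI in each item's <link> element is enough for most clients' feed watchers. A sketch using only the standard library — the function name and feed shape are my own, not from any existing tool:

```python
import xml.etree.ElementTree as ET

def build_feed(title, entries):
    """Build an RSS 2.0 feed from (name, magnet_uri) pairs.

    A torrent client's RSS watcher can poll the resulting XML
    and auto-add each magnet link as it appears."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for name, magnet in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = name
        ET.SubElement(item, "link").text = magnet
    return ET.tostring(rss, encoding="unicode")
```

The "add to Movies / add to TV Shows" buttons on the search page would then just append a `(name, magnet)` pair to that user's list and re-serve the feed.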
I already use http://showrss.info, which provides an RSS feed for all of my favorite shows. So it would be great to manage my own RSS feed for movies and specific episodes.
There are browser addons that can send magnets and torrents straight to the remote APIs of torrent clients.
For smaller sites there are more specialized trackers that allow more control, e.g. chihaya and Gazelle.
> it's not concurrent