Show HN: Magnetissimo – Self-hosted torrent search engine written in Elixir (github.com/sergiotapia)
403 points by sergiotapia on Jan 28, 2017 | 107 comments



I'll be trying this out; it could work well with Cloud Torrent. I've largely moved away from torrents, though, and switched to using NZBs and apps that automatically download TV shows and films[0].

Here is my stack:

1. NZB downloader - NZBGet - http://nzbget.net/

2. TV Shows - Sonarr - https://sonarr.tv/

3. Films - CouchPotato - https://couchpota.to/

4. NZB Search - NZBHydra - https://github.com/theotherp/nzbhydra (I contribute to this project)

5. Plex - https://www.plex.tv/

Add Torrent support:

6. Client - rtorrent - https://rakshasa.github.io/rtorrent/

7. Web Interface - ruTorrent (warning: PHP) - https://github.com/Novik/ruTorrent

8. Proxy private trackers to XML-RPC - Jackett - https://github.com/Jackett/Jackett

9. Cloud Torrent - web interface - https://github.com/jpillora/cloud-torrent

Additional:

10. Plex stats, analytics and user management - PlexPy - https://github.com/JonnyWong16/plexpy

11. Plex request - allow users to request content - https://github.com/ngovil21/PlexRequestChannel.bundle

Hardware is a pair of HP ProLiant Gen8 MicroServers running Ubuntu 14, Docker, nginx and Let's Encrypt. There is no real easy way to set this all up; you have to do each part of the stack yourself (a docker-compose file would go a long way toward simplifying it; see the sketch below).

[0] I spend over $200 a month on content subscriptions, so I don't feel bad about utilizing the convenience of NZB downloads + Plex.
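A minimal docker-compose sketch of part of that stack might look something like the following; the image names (LinuxServer.io builds), ports and volume layout are assumptions, so adjust for your own setup:

    # hypothetical compose file: images, ports and volumes are assumptions
    version: "2"
    services:
      nzbget:
        image: linuxserver/nzbget
        ports:
          - "6789:6789"
        volumes:
          - ./config/nzbget:/config
          - ./downloads:/downloads
      sonarr:
        image: linuxserver/sonarr
        ports:
          - "8989:8989"
        volumes:
          - ./config/sonarr:/config
          - ./downloads:/downloads
        depends_on:
          - nzbget
      plex:
        image: linuxserver/plex
        network_mode: host
        volumes:
          - ./config/plex:/config
          - ./media:/data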


My stack looks fairly similar, give or take a few pieces of software: SABnzbd instead of NZBGet, for one.

I would highly recommend checking out Radarr (https://github.com/Radarr/Radarr) to replace CouchPotato, as I find CouchPotato a very inconsistent experience; sometimes I am outright confused as to what it is doing (seemingly not snatching the movies I want it to).

Radarr is actually a fork of Sonarr, so you can expect a pretty good experience right from the start, and development has been very active since it was announced around a month ago. The project's age is the only caveat: I am still uncovering minor bugs, but the devs are quick to fix them.


Thanks for the pointer. I've found the same with CouchPotato, and its entire quality/category categorization is much more confusing than how Sonarr sets it up.


> [0] I spend over $200 a month on content subscriptions so I don't feel bad about utilizing the conveniance of NZB downloads + Plex

What do you spend $200/month on?


Probably cable, Netflix, Amazon, etc.


Exactly right: cable is $120 a month, then Netflix, HBO, Amazon Prime (all US) and a handful of Australian services. I still prefer the downloaded version since our connections here can be unreliable (although Netflix is usually good).


This is an insane setup, congrats ;)

I thought I was doing well running flexget[1] on Ubuntu on a crappy old netbook.

HTPC manager[2] might be worth a look too.

[1] Setup guide: http://www.htpcbeginner.com/blueprint-flexget-setup-guide/

[2] http://htpc.io/


Sonarr + NZBGet really is the winning team (+ extra nerd credits if you run it on an RPi of course). Can't believe I spent years using SickBeard. Discovering Sonarr was like 'ah, that is how it's supposed to be'. Will check out NZBHydra.


I do the same, very similar setup actually, but I still wind up using torrents when someone wants older stuff.

Unfortunately you get mixed results in terms of how things are labeled on public trackers and I haven't had a private tracker account since what.cd died.

I'll have to try out a few of these torrent search tools and see how it works out, but most private trackers didn't want to just take cash and seemed to want you to jump through hoops instead, so unless I can buy accounts via other means I'll probably just stick to public.


> I'll have to try out a few of these torrent search tools and see how it works out, but most private trackers didn't want to just take cash and seemed to want you to jump through hoops instead, so unless I can buy accounts via other means I'll probably just stick to public.

I think most private trackers are trying to avoid developing a reputation like that of IPTorrents [1], which is notorious for allowing people to "donate" for accounts.

Most private trackers also explicitly forbid members from selling invites for money, and, more recently, giving them to strangers.

[1] https://www.reddit.com/r/trackers/comments/4xta11/so_lets_ta...


Yeah, looks like IPT and ultrahdclub might be worth a purchase though.


That is until they disable your account, and offer to enable it for another $30. Visit /r/trackers and there's probably a thread about IPT.


> most private trackers didn't want to just take cash and seemed to want you to jump through hoops instead

The higher-end trackers are communities, and they typically want people who'll fit in. One way to do this (and the one that's employed) is to only let people in if they're vouched for by someone who's already proven themselves (get an invite from an existing member) or if they've proven themselves on another tracker (get an invite from the tracker itself through the invite forum on another).

Lowering the barrier to entry by letting people simply buy their way in results in the tracker as a whole being degraded. This can be seen through the recent replacements for What.CD. One of the new trackers chose to heavily limit membership and focus on a small group of prolific uploaders that had proven themselves somehow while the other chose to have pretty much open signups. The pickier tracker, despite far more stringent entry requirements and a tiny fraction of the user count, currently has the larger catalogue and higher ratio of FLAC.

Also, my limited experience with pay-to-enter trackers has been that they're pretty much on par with public trackers. Perhaps even lower quality, definitely not close to the What.CDs of the world.


Yeah, this is why I went the usenet route mostly, people aren't whining about community in the usenet world. They just take cash and get it done. There's not a good indexer that doesn't take cash.

I'm just looking for more 2160p really, but so far no trackers I've found, public or private, have satisfied that. Usenet has actually given the best results so far.


> people aren't whining about community in the usenet world

Unfortunately, community is what leads to content. There's an insane amount of stuff that's available on private torrent trackers that hasn't ever touched Usenet and the kinds of people with access to it aren't the kinds of people to put it on Usenet.


I'm the same, but I don't torrent, and I'm using SickRage rather than Sonarr just because I find Mono so unreliable. I hear Sonarr is better at finding older series, though.


My setup and situation are nearly identical (except I use Deluge + deluge-web instead of rtorrent + ruTorrent). I'd be interested in setting up Magnetissimo, but probably only if there were a Docker container, as everything else of mine runs in Docker (in unRAID). Every once in a while something old isn't on Usenet and I have to get it manually, so a local search would be convenient.


I could not get CouchPotato to work at all. I was hoping to at least use it for its renaming capability, but no such luck. Any tricks you want to share with the class?


Wow. I just use SS-Plex and press "Watch Later"


Kodi


Kodi will be great in a few years; at the moment its interface is extremely clunky and slow.


Really? I have been an XBMC/SPMC user for years, and I just got a new device with a recent Kodi and the default skin (I think). It might be 'somewhat' clunky, but I definitely wouldn't call it 'extremely clunky and slow'. I consider it just fine and not that much of a difference from the SPMC of a couple of years ago. What are you running it on?


I wouldn't call it clunky and slow; it is just meant to be used from a TV. It's like saying Steam Big Picture mode is clunky if you use it with a mouse and keyboard. Kodi has a good API, although there are no good web interfaces yet. I have also been contemplating trying to build a CLI around it.
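For anyone tempted by the CLI idea: Kodi exposes a JSON-RPC API over HTTP once the remote control option is enabled in its settings. A rough Elixir sketch (the host/port and the HTTPoison/Poison deps are assumptions):

    defmodule KodiCLI do
      # address of the Kodi box; adjust host/port to your setup
      @url "http://192.168.1.50:8080/jsonrpc"

      def call(method, params \\ %{}) do
        body = Poison.encode!(%{jsonrpc: "2.0", id: 1, method: method, params: params})
        {:ok, resp} = HTTPoison.post(@url, body, [{"Content-Type", "application/json"}])
        Poison.decode!(resp.body)
      end
    end

    # e.g. toggle play/pause on the active video player:
    # KodiCLI.call("Player.PlayPause", %{playerid: 1})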


It depends what hardware you are running Kodi on. I was running OpenELEC on a Raspberry Pi, and it was pretty much unusable for me. When I opened Kodi, the scan for new media would take ages, and often not work at all. Now I have a dedicated Windows 10 HTPC with an SSD, and the experience is night and day versus the RPi.


Yeah, I usually give up and go to Netflix or Plex.

Having to click into each link, then in again to the stream, then load it up, then back out, and then repeat that process for the next episode. That's just one example, but there are lots of things like that.


Hope you guys like this and use it! If it's hard to use, tell me. My goal is to make this simple to run anywhere. Part of that will be to use Distillery to create compiled executables.

I rewrote this after an initial version that used Redis as a processing queue. Asking end users to install Redis was a step too far, I think, so I stepped back and thought about how to solve this without external dependencies.

I ended up with a simple Elixir and Erlang queue implementation. It's much simpler, just as fast, and has no freaky deps. A GenServer starts up a worker for each crawler, and each worker schedules its own work. It's really strange to use a language so complete; it kind of feels like cheating. Programming in easy mode.
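For anyone curious, the shape of that pattern is roughly the following; module and function names here are illustrative, not Magnetissimo's actual code:

    defmodule Crawler.Worker do
      use GenServer

      def start_link(site), do: GenServer.start_link(__MODULE__, site)

      # seed the Erlang queue and schedule the first tick
      def init(site) do
        schedule_crawl()
        {:ok, %{site: site, queue: :queue.from_list(site.start_urls)}}
      end

      # each tick: pop one URL, process it, reschedule
      def handle_info(:crawl, state) do
        state =
          case :queue.out(state.queue) do
            {{:value, url}, rest} ->
              process(url)
              %{state | queue: rest}

            {:empty, queue} ->
              %{state | queue: queue}  # nothing left; idle until reseeded
          end

        schedule_crawl()
        {:noreply, state}
      end

      defp schedule_crawl, do: Process.send_after(self(), :crawl, 1_000)
      defp process(_url), do: :ok  # fetch the page, parse it, save magnets
    end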

Pull requests very welcome to the crawler folder, it's super easy to write a crawler for your favorite site!

https://github.com/sergiotapia/magnetissimo/tree/master/lib/...


You could make it a Nerves project too, maybe after you get rid of some of those database dependencies.

It'll make you a ~100MB Linux distro that will boot straight into your app in ~10 seconds on a Raspberry Pi.

http://nerves-project.org/


Very useful project, thanks for sharing this!


If the goal is to make it simple to run, why not support SQLite?


Seems like the goal is avoiding dependencies and he's got 3 built in database choices for that already. DETS or Mnesia would probably work well for this.


Thank you, I'll take a look at what those are. If there's a way to remove Postgres from the equation, I'm all for it.


Mnesia is probably what you want in that case, since it turns (D)ETS into a full-fledged database rather than a simple key-value store (think SQL v. Redis). Both are part of Erlang/OTP, though, so you get them for free just by using Erlang/Elixir/LFE/whatever.

There's a hex.pm package called "Amnesia" that wraps Mnesia in a more Elixiry format.
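A rough sketch of what that swap could look like, calling the Erlang :mnesia module directly from Elixir (the table name and attributes are made up for illustration):

    # one-time setup: persist the schema on this node, then start Mnesia
    :mnesia.create_schema([node()])
    :mnesia.start()

    # the first attribute (:magnet) becomes the primary key
    :mnesia.create_table(:torrents,
      attributes: [:magnet, :name, :size],
      disc_copies: [node()]
    )

    # writes and reads, no Postgres in sight
    :mnesia.dirty_write({:torrents, "magnet:?xt=urn:btih:aaa...", "Some ISO", 4_700_000_000})
    :mnesia.dirty_read({:torrents, "magnet:?xt=urn:btih:aaa..."})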


That sounds fantastic. I'm all for trimming the fat if we don't need it. I'll research this, thanks for the suggestion.


If you're using Phoenix or Ecto, look into ecto_mnesia (https://github.com/Nebo15/ecto_mnesia). It wraps up Mnesia in a nice Ecto wrapper. It's limited in what your primary keys can be.


DETS (and ETS, the in-memory equivalent) are quite simple for basic K/V-ish storage needs. If you need a relational database, Mnesia should do the trick.


You should look for sitemaps. They are often listed in robots.txt. It will save you lots of time!

These sites have sitemaps:

http://torrentproject.se/robots.txt

https://www.torrentdownloads.me/robots.txt

https://thepiratebay.org/robots.txt

http://goldtorrents.com/robots.txt

https://bitsnoop.com/robots.txt
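Pulling those out is cheap; a small Elixir sketch, assuming HTTPoison for the HTTP call:

    defmodule Sitemaps do
      # fetch robots.txt and return every "Sitemap:" URL it declares
      def from_robots(base_url) do
        %{body: body} = HTTPoison.get!(base_url <> "/robots.txt")

        body
        |> String.split("\n")
        |> Enum.filter(&String.starts_with?(&1, "Sitemap:"))
        |> Enum.map(fn line ->
          line |> String.trim_leading("Sitemap:") |> String.trim()
        end)
      end
    end

    # e.g. Sitemaps.from_robots("https://thepiratebay.org")
    # #=> list of whatever sitemap URLs the site declares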

Enough with the hype: "high performance"? What?? Your program is not concurrent: you have one process per website and go through URLs one at a time. You don't even use Bloom filters to skip already-seen pages.
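(For reference, per-site concurrency is nearly free on the BEAM; Elixir 1.4's Task.async_stream caps parallelism, and a MapSet, or a Bloom filter library, can skip already-seen URLs. An illustrative fragment with placeholder data, assuming HTTPoison:)

    urls = ["http://example-tracker.org/browse?page=1"]  # placeholder URL list
    seen = MapSet.new()                                  # already-crawled URLs

    urls
    |> Enum.reject(&MapSet.member?(seen, &1))
    |> Task.async_stream(&HTTPoison.get!/1, max_concurrency: 10, timeout: 15_000)
    |> Enum.each(fn {:ok, _response} -> :ok end)  # parsing/saving would go here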


After many years, Bittorrent doesn't cease to amaze me. Such a fantastic technology. Sad, though, that it is primarily known for illegitimate uses. This gives it a bad reputation and makes it hard to promote Bittorrent for more worthwhile applications. For instance, Bittorrent would be a perfect platform for sharing scientific data but when I suggest that to colleagues, they don't take it seriously because Bittorrent is perceived to be that tool for downloading pirated movies. I wonder how this barrier for adoption in academia could be overcome. Perhaps, with a separate network that is targeted at academic uses (think alt-coins). Call this network ResearchTorrent and researchers could use the technology without ever having to make contact with the whole ecosystem around pirated material. Universities could support this network without risking bad PR and getting in conflict with the music/film industry.


I work on a BitTorrent tracker on the side[0] that's used for deploying software at a few companies, but has the unique feature of a middleware layer. Right now, there are two different middleware that require a signature of the infohash (files being shared) before allowing content to be shared. One is for JWTs[1] and upstreamed in the project and the other is for a blockchain and hasn't been upstreamed yet. Using features like this, they can run their tracker on the public internet and not worry about people sharing illegal content. I had never considered it, but this sounds like a great use case for academia, too.

[0]: https://chihaya.io

[1]: https://jwt.io


Very interesting. Does that mean that there would have to be a central authority that has to sign new torrents? For the present purpose that may not be ideal because this system wouldn't be decentralized, and one institution (or a small set of institutions) would have all the control. Ideally, there would be some consensus mechanism that determines which torrents are bad and shouldn't be distributed. Also, what happens when the key for signing torrents is stolen? In the case of a company using such a system, that situation can be handled, but in a more decentralized system with an emphasis on long-term storage the loss of a key may be a bigger problem.


>Does that mean that there would have to be a central authority that has to sign new torrents?

In this scenario, the source of truth is the location where your users are acquiring their .torrent files.

>For the present purpose that may not be ideal because this system wouldn't be decentralized, and one institution (or a small set of institutions) would have all control. Ideally, there would be some consensus mechanism that determines which torrents are bad and shouldn't be distributed.

Trackers are the centralized alternative to using the Mainline BitTorrent DHT. The performance advantage is significant enough that they're still in common use despite the ubiquity of DHT support in client software. You could build a tracker that serves peer data it has cached from crawling the DHT, which is an idea I don't think has been done yet. As for a consensus mechanism, that should probably be done out of band, and client software could be configured to query it and react appropriately.

>Also, what happens when the key for signing torrents is stolen? In the case of a company using such a system, that situation can be handled, but in a more decentralized system with an emphasis on long-term storage the loss of a key may be a bigger problem.

With the JWT system, you can rotate keys, but it will break clients announcing with the old key. Users would have to manually update all their torrents to participate in the swarm again. This is a shame, but there is no official signing scheme in BitTorrent. If the JWT system as chihaya has implemented it gathered traction, it could be updated so that clients fetch JWTs regularly from an auth endpoint; then it would be possible to handle rotation the same way as most OIDC implementations.


I like the idea of a tracker crawling the DHT. Research institutions could sign their torrents with their private keys and the tracker only indexes those signed with approved keys. Institutions register their public keys with the tracker and when an institution loses their keys, only their torrents are affected. Anyone can run such a tracker and there is no central authority. Nice. Is it possible to store signatures in torrent metadata?


http://academictorrents.com/ claims to have 16.33 TB of research data.


Last time I checked (3 years ago) this project didn't seem very active in terms of submission numbers and it appears that this hasn't really changed (7 submissions in Jan 2017). Not sure why, but the UI may be one reason.


I don't think it's fair to expect scientific output to be comparable to Hollywood. Anyway, your colleagues think this is a pirate site?


It is also used for sharing plenty of free, large software: many Linux distros are available via torrents, for example.


Looks really interesting. I love goal 1, no JavaScript: YES! Thank you! And Elixir is a very exciting language, especially to an ops person; when I read Elixir code it actually makes sense to me and seems elegant. Technical praise aside, the objective of the product is solid. This is something I've been interested in for a while; I have a solution for Usenet but not so much for torrents.


Absolutely. +1 for the "no JavaScript" goal.


I'm so sick of 'designers' thinking they're programmers and smashing out some Node.js or JavaScript-heavy crap with no regard to performance or security. Elixir seems to attract some smart minds that think 'big picture'. Obviously I'm generalising here, and I'm not stating that this particular project is brilliant or anything, but it is a trend I've observed.


NodeJS is a dependency in the usage instructions for Debian 7 and CentOS.



This project is using Phoenix. Node.js is an optional dependency used to compile static assets like CSS, etc. It's not actually required to write applications using Phoenix.


The team removed all the JavaScript overnight.


Ew, what for?


The creators of Phoenix didn't want to write their own code to manage Javascript and CSS (e.g. rewriting the Rails "asset pipeline" in Elixir), so they just integrated Brunch [1], which is built on Node.js and NPM. I think this was an excellent decision. (Node.js is only needed during development and deployment of your assets. It is not used in production.)

[1] http://brunch.io
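For the curious, this is roughly how a Phoenix 1.2-era app wires Brunch in: a development-only watcher in config/dev.exs that shells out to the Node binary (the app and endpoint names below are placeholders). Delete the watcher and Node never runs:

    # config/dev.exs (as generated by `mix phoenix.new`, approximately)
    config :my_app, MyApp.Endpoint,
      watchers: [node: ["node_modules/brunch/bin/brunch", "watch", "--stdin",
                        cd: Path.expand("../", __DIR__)]]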


That is still pretty awful IMO; I think quite a few of the people considering Elixir and Phoenix are doing so explicitly to get away from having Node.js anywhere close to their computers in the first place.


I don't see the problem. Why not use JavaScript toolkits to package and develop JavaScript?

I can understand languages not self-hosting their compilers, but I'd at least like a language to self-host its toolkits: bundlers, packers, preprocessors, etc.


I personally don't consider it a problem if I don't personally have to touch it. Phoenix does that reasonably well.


+1 indeed. The unnecessary use of JavaScript everywhere is making the Internet unusable for people with slow connections and computers (possibly 1 billion worldwide) and for security-conscious users (e.g. Tor Browser users).


When someone posts about a GPL violation, people are quick to decry it, and rightly so. But this repo demonstrates, right in the first image, this tool being used to violate copyright, and we applaud that. Why do we have this double standard?


They're exact opposites from my ethical point of view.

A GPL violation is a violation where the person didn't share information.

Torrenting 'illegally' is the act of sharing information.

If you're all for sharing and the free flow of information... I don't see how these two conflict, from my ethical point of view.


It's not a matter of stealing. Theft is a poor metaphor.

It's a matter of undermining the economic systems that allow sound engineers, voice actors, grips, actors, and makeup artists to earn enough of a living to produce art.

Consider what has happened to journalism since smaller papers lost the ability to support professionals.


The tricky bit is that the GPL is built on top of copyright. In effect it uses the legal machinery of copyright against itself.


They weren't arguing from a legal, but from an ethics point of view.


It's not "tricky", it's by design.


In both scenarios the person's property is being stolen.

Their property might be in the form of information, sure.

But one product being more widely distributed does not make any ethical difference. The fundamental point is that the information/property has been stolen in both cases.


Copyright infringement has as much in common with stealing as it does with sharing. Both of them are ideological terms to paint the action in a certain light and neither give a complete picture.


There are a lot of things common to both scenarios, and a lot of things different. People understand that what makes the difference to you is whether someone has the legal right to use something, but it's just been explained to you that the spread of information is what is important to some other people (because you asked). Your ethics don't define ethics and are not fundamental.

People break the GPL in order to steal from the public. People break copyright in order to steal for the public.


> In both scenarios the person's property is being stolen.

With material theft, the person who is stolen from loses access to what is stolen. This is true in the case of a GPL violation, but not in the case of classic copyright violation.

You might argue that copyright violation prevents extracting profit - but it doesn't mean the victim loses access to their property.


It's not a property in the first place. It's nowhere near like physical belongings and it shouldn't be treated as such.


I really cannot see copyright being violated in that screenshot. It is just a listing of torrent files available on one of the indexed sites. You could use Google search to produce a similar effect (showing links to torrent files hosted on some website).

Edit: I'm not trying to be snarky or overly pedantic, and the list of supported sites does suggest that this might be intended to facilitate piracy. Still, this is interesting and potentially useful technology, and seeing it merely as a tool for copyright infringement misses an important point.


Google does not advertise itself as a way to violate copyright. This repo does, though.


How so? If indexing the content of some sites is seen as advertising copyright violation, so does Google.

If the content of the sites are the problem, well why would Google get a free pass on that point too?

The goal of the application itself, just like Google's, doesn't have anything to do with copyright violation, it just can be used that way.


Have a look at this repo for yourself if you don't believe me. The image demonstrates the tool being used for copyright violations. It's billing itself that way, no?


Are we looking at the same image? The one I see shows a search for the term "hdtv" with some results from an indexed site. I really can't see how the user has in any way violated anyone's copyright.

Edit: Removed an irrelevant addendum.


It's not like the image is somehow accidental or out of the author's control. I don't know how you wouldn't read this as advertising a way to violate copyrights; it's an explicit advertisement of the sort of content you can find.

If the image weren't a suitable representation, then the author wouldn't have chosen it. You're trying to downplay that these sorts of decisions are intentional.


This is like an advertisement for guns, with a laser sight on the gun, pointed at someone.

Gun ownership is not a crime. But the gun is shown in a scenario where a crime is about to be committed.

At first I didn't have any issue but this analogy seems to have convinced me it's a bad image to use.


Or the image was a random search and there isn't a team of marketing and legal experts to interpret a quick demo in the worst way possible.

But I'm sure the maintainer will be happy to modify the picture with a search for "big buck bunny" if you ask nicely.


I think it's because of the "hdtv" search results you see in the screenshot. This wouldn't have been the case if a search term like, say, "ubuntu iso" had been used.


There is no we.

Some of the people decrying GPL violation also decry this kind of copyright infringement.

But, also, GPL has a clear purpose and limits itself to that purpose.

The traditional rights protection organisations have sought to extend copyright laws in weird ways.

In the UK "format shifting" (eg ripping a cd you bought to mp3) has gone in and out of law. I think it's currently unlawful to rip a CD that you have bought to mp3.

Backing up my purchased media should not be a copyright infringement, and I have no problem breaking the law to do so.


It's not uniform, but the zeitgeist certainly supports GPL or tech copyrights and disavows music and movie copyrights.

And why are you bringing up backups? What does that have to do with torrenting?


It does support GPL, at least in the tech "bubble". I agree with movie copyrights, but not longer than 10 years and perhaps 25 for music/texts.

The industry has lobbied so hard that the goodwill of the younger generation is gone. (It's getting better now with Netflix and offerings like this.) But I don't feel that somebody broke the law when they downloaded music or videos, and I am part of this younger generation that grew up like this.


I agree. Also, it's not like my favourite artists receive any significant amount of money from Spotify, YouTube or similar services. The vast majority goes to the top 0.5% of megastars and their record labels. Paying those services shouldn't soothe your conscience one bit when it comes to the poor starving artists and content producers.

Instead, when I buy their albums on Bandcamp.com, that particular artist gets 85% of what I'm paying! Also I get to own an actual copy instead of the ridiculous deal streaming offers. Storage is so cheap these days! But don't tell the consumer or we can't sell them streaming services on one end and data bundles on the other.

Unless you listen almost exclusively to modern commercial "pop" music (though calling it "popular" has been a misleading term ever since people have been able to listen to music via channels that the "pop" industry turns a blind eye to). Still, it's a valid choice, of course, if you enjoy that music! And honestly quite a lot of "pop" music is pretty good. I presume that the people who produce Justin Bieber's songs are reasonably well compensated for their work, but not fairly if you look at where the money is going. So you're not really paying to support the content producers here either; a lot of your money is also being spent on unnecessary infrastructure designed to keep out the other content producers and to shove the music into the ears of people who don't even really enjoy it.

Someone elsewhere in this thread said to look at the news agencies and journalism doing badly and dropping in quality. The battle there is over advertising versus adblockers. Well, here's some news for you: they're supposed to be a mirror of society, and a big factor is living in denial of the horror, because it can't possibly really be that bad. Another one to think about for a bit: all those poor blocked ad networks; who do you think profits most from those ads? The quality journalists, or the listicle clickbait writers filling the web with attention-slurping crap?

And at least for the pop-music industry I can kind of understand why the big labels and rightsholders are very quiet about the unfair distribution and very vocal against anything that challenges it (because they profit very much from the listicle-equivalent of music). But for news agencies I think it's strange, or just plain stupid, to cry about adblockers. People running adblockers hurt the listicle clickbait writers way more than they hurt quality journalists, disproportionately so; it kills off some dead-weight competition. On the income side, news agencies, even smaller ones, have sufficient brand identity that they don't need a third-party ad network; they can direct-sell their ad space to an advertiser. Adblockers don't tend to block first-party ads (and if they do, that is something I'd be willing to add an exception for). That cuts out the middlemen, who also royally screw up the incentives around intrusiveness. Can you imagine a direct sale of ad space going like: "...and could you please cut up the article into four pages so that people have to click through, so they load my ad more often?" Of course not; it hurts both the advertiser and the publisher. But add in a middleman with a sufficiently clever cost-per-impression scheme, and suddenly the incentives are against almost everybody else: publisher, advertiser and consumer.

TL;DR: the idea of copyright is badly broken, the industry doesn't understand this yet, advertising is not the answer, and the current state of affairs is suboptimal for just about everybody.


Mostly because the RIAA and MPAA managed to make themselves universally hated in the late 90s. The backlash extends far beyond the perennially frugal teenage downloaders and all the way to at least one prominent Harvard Law School professor.

https://www.ted.com/talks/larry_lessig_says_the_law_is_stran...


Your philosophy is that, as a society, we shouldn't enforce laws if the person/entity being affected by illegal behavior is unpopular?


I have not commented at all on my own philosophy and I have not studied this topic even a tenth as deeply as the law scholar I mentioned in a sibling comment. I do think he made a compelling case that criminalizing the behavior to begin with has some corrosive effects, though.

"This war must end. It is time we recognize that we can't kill this creativity. We can only criminalize it. We can't stop our kids from using these tools to create, or make them passive. We can only drive it underground, or make them "pirates." And the question we as a society must focus on is whether this is any good. Our kids live in an age of prohibition, where more and more of what seems to them to be ordinary behavior is against the law. They recognize it as against the law. They see themselves as "criminals." They begin to get used to the idea.

That recognition is corrosive. It is corrupting of the very idea of the rule of law. And when we reckon the cost of this corruption, any losses of the content industry pale in comparison."

http://www.wsj.com/articles/SB122367645363324303

Possibly of interest: Lessig lost a US Supreme Court case regarding the constitutionality of a copyright extension act in a split decision 14 years ago.

https://www.legalaffairs.org/issues/March-April-2004/story_l...

He authored the Creative Commons license and is currently a possible candidate for the Supreme Court himself. Note that he's not arguing for copyright infringement, but for a reform of copyright law.


I think it's because of the power difference. When there's a GPL violation, it's easy to frame it as big greedy company vs selfless small developer. Everybody loves rooting for the underdog. In torrenting, it's small time person vs big greedy company.


This is because those people are choosing more access for themselves in both cases. They'd rather have more "stuff" at little or no expense, and they dismiss their moral or legal obligations when that becomes inconvenient.

If someone truly believed in the moral aims of the GPL then they probably wouldn't pirate media either. That sort of pure and consistent philosophy is an increasingly rare and admirable one.


This ain't really an accurate portrayal of the aims of the GPL.

The FSF believes that all software should be free. The GPL was created to attempt to establish such freedom by leveraging the copyright system. If the copyright system were to be abolished right now worldwide, the GPL would be obsolete, since all software would be free.

Piracy is thus not antagonistic toward the goals of the GPL; rather, it's complementary. Piracy does not reduce access to information, and neither does the GPL; thus, neither are at odds with the other. They're simply different implementations of the same goal: one implementation ignores copyright entirely, while the other interpretation uses copyright against itself.


The GPL uses lawful means to encourage sharing and discourages restricting information.

Richard Stallman and the FSF would encourage you not to engage with restricted media if you don't believe in it... not to disregard the laws that keep the media restricted.

Piracy and the GPL are NOT implementations of the same goal. The GPL grants a framework for sharing and encourages others to do the same... piracy is availing yourself of something against the wishes of its author.

To say that the GPL and piracy implement the same goal is to suggest that sharing your home with a friend is the same thing as having a stranger squat in your house against your wishes. The sharer and the squatter are not implementors of the same goal.


The goal of copyright law, in the US (from the Constitution, Copyright Clause) was: To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

All earthly art is influenced or derived from previous art. Some is outright copied, then modified to produce something "new". Sometimes this is considered valid, but sometimes it's not. Either way, that is how art (which includes music) works.

But in the US (and other countries with similar corporate influences), copyright is used both to prevent derivative works and to control the consumption of the art.

I could give numerous examples, but with this HN audience I doubt it's necessary. We've all experienced some unreasonable negative related to copyright.

So circumventing copyright puts the ethical control back into our hands. Those who are thieves and freeloaders will always exist, but the rest of us will contribute to supporting the art that we consume - even if we download that art "illegally".


The GPL cases tend to be someone making money off their violation. Not that piracy is okay, but the situations aren’t too similar.


That you even use the term "piracy" to mean file sharing rather than murder and pillage on the high seas is testament to the past efforts of content industries.


Actually I am trying to avoid using the GPL as much as I can (not very successfully) because I do not agree with the intent of the license.

This has nothing to do with the fact that in my country there is very limited access to original content, especially purchasing it legally. I watch almost everything I can in the cinema and keep torrenting (which is perfectly legal here) to a minimum. However, I think the ultimate solution here, just like in other cases (like music), is for distributors to get the hell out of the deal going down between producer and consumer. Most torrenting comes from the inability of distributors to keep up with technology and offer a reasonable distribution mechanism that works globally.


Looks like a useful tool, but I kinda wish this kind of program had a CLI. It seems to do one main thing (searching), and it could be super useful to be able to pipe the results to a downloading program. Some programs can benefit vastly from a GUI; for this one, I'm just not sure what the point is. Kudos to the developer nevertheless, both for the tool itself and for the choice to go with Elixir.
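Something like the following would cover the pipe-to-a-downloader case: an escript that prints one magnet link per line. Magnetissimo.search/1 is hypothetical here; it would need wiring to whatever the real search function ends up being:

    defmodule Magnetissimo.CLI do
      def main(argv) do
        argv
        |> Enum.join(" ")
        |> Magnetissimo.search()   # hypothetical search function
        |> Enum.each(fn torrent -> IO.puts(torrent.magnet_url) end)
      end
    end

    # mix.exs:  escript: [main_module: Magnetissimo.CLI]
    # usage:    ./magnetissimo "ubuntu iso" | xargs -n1 transmission-remote -a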


Sounds like you found yourself a wee evening task - hack a command line interface for this tool :D

(edit: only messing - I was just thinking that if I wanted to learn Elixir, hacking on some existing project would be a good way)


I might have to start an anonymous GitHub account so that I can contribute to this. It would be fantastic if there was a way to start a download directly from the web UI.

Many torrent clients can monitor an RSS feed, so that would probably be the best solution. I would add the ability to create a user account, and each user could set up a few RSS feeds. In my case, I would like to have one RSS feed with the label "Movies", and one for "TV Shows". Then the main search page would have buttons to add the magnet link to either RSS feed.

I already use http://showrss.info, which provides an RSS feed for all of my favorite shows. So it would be great to manage my own RSS feed for movies and specific episodes.
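A rough sketch of what serving such a label-scoped feed from Phoenix might look like; the controller, the Feeds lookup, and the field names are all hypothetical:

    defmodule Magnetissimo.FeedController do
      use Phoenix.Controller
      import Plug.Conn

      def show(conn, %{"label" => label}) do
        items =
          label                             # e.g. "movies" or "tv-shows"
          |> Feeds.torrents_for_label()     # hypothetical per-user lookup
          |> Enum.map(fn t ->
            "<item><title>#{t.name}</title><link>#{t.magnet_url}</link></item>"
          end)
          |> Enum.join()

        xml = ~s(<rss version="2.0"><channel>#{items}</channel></rss>)

        conn
        |> put_resp_content_type("application/rss+xml")
        |> send_resp(200, xml)
      end
    end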


> It would be fantastic if there was a way to start a download directly from the web UI.

There are browser addons that can send magnets and torrents straight to the remote APIs of torrent clients.


Can anyone suggest a good torrent tracker suitable to run a torrent site? Opentracker [0] seems to be abandoned and the author doesn't reply.

[0] http://erdgeist.org/arts/software/opentracker/


opentracker is what the high-throughput trackers use. The protocol is fairly simple and doesn't change a lot, and the implementation is fairly minimal, so they don't need to update the codebase often; it does its job, it's mature.

For smaller sites there are more specialized trackers that allow more control, e.g. chihaya and gazelle.


Has anyone hosted this somewhere yet? I was hoping I could just run it locally but you have to crawl all those sites every time, and it's not concurrent...


I just set up a test site based on it: http://v.reni.com:4000/


> leveraging Elixir's GenServer and Erlang's BEAM VM.

> it's not concurrent

(。´・ω・)?


It's concurrent in that multiple crawlers run at the same time, but I do not concurrently crawl a single website. An initial version did, and Magnetissimo easily crawled 600 torrents every 3 seconds (not hyperbole). It just pounded websites, and I didn't want that. It was bad karma.
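The polite version of a crawl loop is essentially just a fixed delay between page fetches per site; a trivial sketch (the delay value and the HTTPoison dep are assumptions):

    defmodule Crawler.Politeness do
      @delay_ms 3_000  # one page every 3 seconds per site

      # fetch each URL in sequence, handing the response to the caller's
      # handler, and sleep between requests so the site isn't pounded
      def crawl_pages(urls, handler) do
        Enum.each(urls, fn url ->
          url |> HTTPoison.get!() |> handler.()
          Process.sleep(@delay_ms)
        end)
      end
    end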


Are there other programs that offer this same function?


Not a self-hosted program. But, Skytorrents [1] was featured on HN recently [2].

[1] https://www.skytorrents.in/

[2] https://news.ycombinator.com/item?id=13423629


Another "inspirational" list thanks to the US government, from a HN comment[1]

[1] https://news.ycombinator.com/item?id=13237667

[2] https://ustr.gov/sites/default/files/2016-Out-of-Cycle-Revie...


Very useful tool.



