Source: I wrote this about 6 years ago and maintain it every now and then
Let there always be tools tailored to specific groups' needs; the one-size-fits-all approach nearly always ends up dumbing down the interface, removing 'difficult to use' functions and 'complicated' options to present a Fisher-Price interface with big happy buttons and lots of open space.
Also notice that the Books tools weigh in at 47K compressed; there is something to be said for light and nimble tools.
I was going to do it myself, but the build scripts you made are incompatible with Node 17.x (running them there requires exporting `NODE_OPTIONS=--openssl-legacy-provider`), and they also seem to assume that `react-scripts`, etc. are installed in the local environment, rather than including them portably as a yarn package dependency and then doing `yarn exec <xyz>`. There's too much weirdness here, and I'm not familiar enough with the systems involved to even begin looking into making it compatible (I started, and then realised that I'm supposed to be relaxing from my full-time job, not dealing with more weird new tech systems that don't behave sensibly).
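For what it's worth, the usual workaround for that OpenSSL error on Node 17+ looks like the following (the `react-scripts` lines are just an illustration of pinning the tooling locally rather than assuming a global install):

```shell
# Node 17+ ships OpenSSL 3, which removed the legacy hash providers
# that older webpack-based build scripts still call into.
export NODE_OPTIONS=--openssl-legacy-provider
echo "$NODE_OPTIONS"

# Pinning the tooling as a project dependency (instead of assuming a
# global install) would then look something like:
#   yarn add --dev react-scripts
#   yarn exec react-scripts build
```

The `yarn` lines are left as comments since they depend on the project; the exported variable is what makes the old scripts run at all under OpenSSL 3.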
Later that afternoon, Aunt Polly arrived home to an unpainted fence.
But then you come to something where there is only one way to do that thing, or the competition has some countervailing disadvantage, and suddenly Bob from accounts knows how to use AS/400.
The Internet, WWW, and Google were once power-user tools. Global education has increased only modestly. The biggest change has been in lowering barriers to ease of use, including but not limited to the cost of tools.
This isn't an unalloyed blessing --- the minimum viable user is both a blessing and a curse:
- categorization and search;
- voting for every entry;
- maybe a reputation system, where peers who regularly publish good, high-quality content get their 'karma' upvoted. When downloading, you can then prefer people with higher reputation; or when building a catalog of materials in a certain area of interest, you can filter to only peers above a certain karma threshold.
- an incentive for peers to keep seeding whatever they downloaded, at least until achieving a certain ratio (say, 2:1). Maybe by rewarding them with 'torrent tokens' that you can spend on downloading, commenting, etc.
- comments on every entry (published torrent), with means to combat spam, insults, irrelevant stuff, etc. E.g. with a voting system, where comments at, say, -3 votes become collapsed; or/and with the aforementioned 'tokens', which you spend when commenting.
- personal blacklists to block people whose torrents/comments you don't want to see.
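The moderation mechanics in the list above (collapse at -3 votes, a 2:1 seed ratio for token rewards, karma-threshold filtering) are concrete enough to pin down in code. A toy sketch in Python, with all thresholds taken directly from the list and every name invented for illustration:

```python
from dataclasses import dataclass

COLLAPSE_THRESHOLD = -3   # comments at or below this score get collapsed
TARGET_RATIO = 2.0        # seed until uploaded >= 2x downloaded

@dataclass
class Peer:
    name: str
    karma: int = 0
    uploaded: int = 0     # bytes seeded back to the swarm
    downloaded: int = 0   # bytes fetched

    def seeding_reward(self) -> int:
        # Award a 'torrent token' only once the peer hits the target ratio.
        if self.downloaded and self.uploaded / self.downloaded >= TARGET_RATIO:
            return 1
        return 0

@dataclass
class Comment:
    author: Peer
    text: str
    votes: int = 0

    @property
    def collapsed(self) -> bool:
        return self.votes <= COLLAPSE_THRESHOLD

def filter_by_karma(peers, threshold):
    # "filter only peers with certain threshold karma"
    return [p for p in peers if p.karma >= threshold]
```

The point of writing it out is that the whole incentive layer is a handful of comparisons; the hard part is making the counters (votes, ratios, karma) Sybil-resistant in a decentralised setting, which the sketch does not address.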
Maybe I'm reinventing the wheel here and something similar already exists, but for some reason isn't popular (since I've never heard of it)? In that case we need to figure out why it didn't take off and fix it to make it work.
 Your ID will be a cryptographic key a la cryptocurrency wallet.
 With rare but sought-after torrents rewarding you with more tokens.
 Personally adjustable.
It's as hard as making a decentralised Google and a decentralised YouTube at the same time. Over 75 master's students and PhDs have put their coding efforts into it at Delft University.
What IPFS, the Dat protocol, Tribler, and all the others are missing is "adversarial decentralised information retrieval".
For any keyword you type in, the match you want should show first. Trolling, Kremlin bots, and copyright police forces should not be able to bring it down. It's an unsolved problem: how to create a privacy-respecting relevance ranking, or a distributed clicklog.
The core product IMO is just a resource database, with a nice resource description interface, peer syncing, and the ability to search and aggregate results from multiple databases together. A resource could be a web page blob, or a file, or plain text (e.g. a comment). All the social stuff, file viewers, blacklists, etc can be built on top of that, ideally by third parties in an open ecosystem.
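That core (resource records, local search, aggregation across multiple databases) is small enough to model directly. A toy sketch, with every type and field name an illustrative assumption rather than a real protocol:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass(frozen=True)
class Resource:
    """A minimal resource record: a description plus where to find it."""
    title: str
    kind: str        # e.g. "web page", "file", "comment"
    locator: str     # URL, magnet link, library call number, ...

class ResourceDB:
    """One peer's local database of resource descriptions."""
    def __init__(self, records: Iterable[Resource] = ()):
        self.records = list(records)

    def search(self, query: str) -> list:
        q = query.lower()
        return [r for r in self.records if q in r.title.lower()]

def federated_search(query: str, dbs) -> list:
    # Aggregate results from several peers' databases,
    # deduplicating identical records.
    seen, out = set(), []
    for db in dbs:
        for r in db.search(query):
            if r not in seen:
                seen.add(r)
                out.append(r)
    return out
```

Everything else mentioned in the comment (peer syncing, viewers, blacklists) layers on top of `Resource` and `federated_search` without changing this core.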
Is this what you're talking about? At least as an MVP:
The resources themselves might be found anywhere (physically at a library, in digital form for purchase, downloadable via torrent, etc.). A true library of babel.
I don't know much about library science, but I suspect catalog federation protocols already exist; there is probably work to be done, though, in making such a thing resilient.
Wikidata has this in its scope (and it makes sense to keep the data there, since a book catalog needs to cross-reference entries for authors, topics, etc.), and the Internet Archive's OpenLibrary leverages their data.
Even when you have a local IPFS node (which is not common), the "last meter" delivery is still done using HTTP too, via a local gateway.
Indeed, it seems very common, but that doesn't make my original point less true.
> the "last meter" delivery is still done using HTTP too, via a local gateway
Yeah, that seems common too, but less "wrong" to call it "IPFS support" in that case, in my opinion.
Running your own IPFS gateway at least means the content is actually fetched from the network (internet) via IPFS, while using ipfs.com/cloudflare-ipfs.com is not any different from just using a CDN (except that you usually have to pay for CDNs, while IPFS gateways seem to be free, for now).
I think there's still a lot of value in this way of using IPFS: using ipfs.com/cloudflare-ipfs.com makes it much easier to swap to another IPFS gateway (including your own local one) if those gateways ever give you issues.
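The reason swapping gateways is easy is that a path-style gateway URL is just a prefix in front of `/ipfs/<CID>`. A small sketch (the CID below is made up for illustration; the `/ipfs/` path convention and the default local gateway port 8080 are the standard ones):

```python
def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    """Build a path-style gateway URL for a given CID.

    Swapping gateways (public, Cloudflare, or your own local node)
    only changes the prefix; the CID, and hence the content, is fixed.
    """
    return f"{gateway.rstrip('/')}/ipfs/{cid}"

# Hypothetical CID, not a real published file:
cid = "QmExampleExampleExampleExampleExampleExampl"

public = gateway_url(cid)                            # via a public gateway
cdn    = gateway_url(cid, "https://cloudflare-ipfs.com")
local  = gateway_url(cid, "http://127.0.0.1:8080")   # your own node
```

Because the CID self-identifies the content, any of the three URLs should return byte-identical data, which is exactly the "easy to swap" property the comment describes.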
I was interested in IPFS as a possible storage system for virtual-world assets. Not for piracy, but as a way to sell virtual-world assets with no ongoing obligation to host the content: you store the ownership on some blockchain and the content in some storage system with pre-paid "perpetual care".
I'd seen one offer at $5/gigabyte/forever.
The idea is supposed to be that Filecoin is a derivative of future declines in storage pricing, and profits on that derivative pay for the storage. Unclear if this works. There's one seller offering "perpetual storage" for a one-time fee. But their terms of service say "Data will be stored at no cost to the user on IPFS for as long as Protocol Labs, Inc. continues to offer free storage for NFT’s." No good.
(Just once, I'd like to see an application for NFTs that actually did something besides power a make-money-fast scheme.)
IPFS is a model for providing a content-addressable storage system -- so if you have a particular hash (the CID) of a piece of content, you can obtain it without having to know where or who (or how many people) are storing it. Obviously at least one node on the IPFS network you're using has to have stored that data, but it only needs to be one; more nodes make it easier and quicker to access. Almost all IPFS nodes are run and offered for free, either by volunteers, by major services like Cloudflare or Protocol Labs' dweb.link (which act as gateways so that you can access that file network over http/https), or by web services that you pay to host your content on IPFS and manage it through a traditional API, like Textile, Fleek, or Fission.codes.
The key point here for someone with your use case, is that you have lots of flexibility as to who is hosting your files. You can start off just running your own node, or pay someone else, or pay lots of providers that are geographically diverse, or just do it among a bunch of volunteers. You're not tied to a single provider, because wherever your data is stored, you or your users will be able to find it.
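The "you're not tied to a single provider" property falls out of content addressing itself, and can be shown in a few lines. A toy sketch only: real CIDs are multihash-encoded Merkle-DAG roots, not raw SHA-256 hex digests, but the principle of fetching by *what* the content is rather than *where* it lives is the same:

```python
import hashlib

def address(content: bytes) -> str:
    # Toy content address: the hash of the bytes themselves.
    return hashlib.sha256(content).hexdigest()

class Node:
    """A toy storage node: a map from content address to content."""
    def __init__(self):
        self.store = {}

    def put(self, content: bytes) -> str:
        cid = address(content)
        self.store[cid] = content
        return cid

    def get(self, cid: str):
        return self.store.get(cid)

def fetch(cid: str, nodes):
    # Any node holding the content can serve it; the requester doesn't
    # care which one answers, and can verify the bytes against the CID.
    for n in nodes:
        data = n.get(cid)
        if data is not None and address(data) == cid:
            return data
    return None
```

The verification step in `fetch` is the key: because the address commits to the bytes, you can accept the content from any provider, paid or volunteer, without trusting them.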
Filecoin is a project to fix the incentive issues that have affected historical decentralization projects like BitTorrent, and that can lead to decentralization attempts like this collapsing into just a single centralized service like AWS.
Storage providers on the Filecoin network negotiate directly with customers to store files -- they receive payment directly from those customers, but they are also incentivized to offer storage, and to keep storing those files over the long term, because Filecoin has a proof-of-storage setup where storage providers get utility coins in return for proving that they're either making space available or storing customers' files. It's all very zero-knowledge-proof and fancy, but the important thing is that with this in place, and a flat, competitive market for storage, storage providers on this network have good commercial reasons to offer low prices, and don't care that you're not tied directly to them (in the way that Amazon and other traditional storage providers are tempted to lock you in).
Filecoin isn't so much a derivative of future declines, but a way to establish pricing in an environment where there actually is a free(r) market for online storage. And IPFS is a protocol that establishes one part of that freer market, which is to decouple who is storing your files, from how you might access them in the future. So far, this seems to be working, with prices being much cheaper than the alternatives, and with some degree of geographical and organizational diversity: https://file.app/
Storage providers are now also competing on other aspects, such as ecological impact (see https://github.com/protocol/FilecoinGreen-tools ), speed of access, etc., which is what you might expect in a flatter market. We also see larger storage providers offering separate markets for large (>1 pebibyte) customers.
Happy to talk about this more, I'm firstname.lastname@example.org. Big fan of your work, etc, etc.
So why bother with all the crypto stuff?
IPFS works, although performance might be an issue for rarely-requested files, since most of their blocks will require multiple hops to reach. If the file is stored on some reliable IPFS node(s), anywhere, you'll be able to access it eventually.
Filecoin, however noble it may be, doesn't seem to be taking off (yet). IPFS doesn't currently work very well as a storage service because there's no guaranteed storage by others. Even if Filecoin achieved mass adoption, you'd have to pay someone to host your files to get reliable third-party storage, and the offered fee might not be adequate motivation.
IPFS works perfectly well, though, for hosting file(s) yourself without the risk of getting your network link saturated if the files' popularity spikes. As an auto-scaling CDN it works great, though with poor performance for rarely-accessed files. The solution to the file storage problem, it seems to me, is to integrate existing CDNs with IPFS: serve rarely-accessed files quickly and cheaply from the CDN, then stop serving a file from the CDN once it's popular enough to be duplicated by a bunch of IPFS nodes for free. Maybe Cloudflare can plug that rarely-accessed-file gap and offer file access, integrated with IPFS, at zero hosting cost.
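That hand-off policy is simple to state: the CDN carries a file until enough IPFS nodes hold copies, then the swarm takes over. A toy routing rule, where the threshold of 5 replicas is an arbitrary assumption and the replica counts would in practice come from something like a DHT provider lookup:

```python
REPLICA_THRESHOLD = 5  # assumed: enough IPFS copies to drop the CDN

def serve_from(replicas: int) -> str:
    """Route rarely-held files to the CDN, widely-held ones to IPFS."""
    return "ipfs" if replicas >= REPLICA_THRESHOLD else "cdn"

def routing_table(replica_counts: dict) -> dict:
    # Given {cid: number of known IPFS providers}, decide per file
    # where requests should be served from.
    return {cid: serve_from(n) for cid, n in replica_counts.items()}
```

The interesting engineering is entirely in measuring `replicas` cheaply and without being gamed; the routing itself is a one-line comparison.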
If it is open source, then the support may eventually materialize.
However, the repo has no license, and therefore is not free and open source. Technically that code is still proprietary. How much someone writing a libgen app would care if you forked their proprietary app is left as an exercise for the reader.
I tried writing one, but it seems like libgen does something weird to avoid showing the download link... such that I gave up.