If you wish to donate bandwidth or storage, I personally know of at least a few mirroring efforts. Please get in touch with me over at legatusR(at)protonmail(dot)com and I can help direct you towards those behind this effort.
If you don't have storage or bandwidth available, you can still help. Bookwarrior has requested help in developing an HTTP-based decentralizing mechanism for LibGen's various forks. Those with experience in software may help make sure those invaluable archives are never lost.
Another way of contributing is by donating bitcoin, as both LibGen and The-Eye accept donations.
Lastly, you can always contribute books. If you buy a textbook or book, consider uploading it (and scanning it, should it be a physical book) in case it isn't already present in the database.
In any case, this effort has a noble goal, and I believe people of this community can contribute.
P.S. The "Pirate Bay of Science" is actually LibGen, and I favor a title change (I posted it this way as to comply with HN guidelines).
 bitcoin:12hQANsSHXxyPPgkhoBMSyHpXmzgVbdDGd?label=libgen, as found at http://184.108.40.206/, listed in https://it.wikipedia.org/wiki/Library_Genesis
 Bitcoin address 3Mem5B2o3Qd2zAWEthJxUH28f7itbRttxM, as found in https://the-eye.eu/donate/. You can also buy merchandising from them at https://56k.pizza/.
Edit: Found the other comment where you link to the seeding stats: https://docs.google.com/spreadsheets/d/1hqT7dVe8u09eatT93V2x...
Books are a safe bet to pirate
This is an existential threat to the deep-pocketed likes of Elsevier et al. They will use the law to make an example of anyone too close to their sphere of influence, so if you are in the US or the EU, support the efforts of LibGen vocally and loudly, and contribute anonymously, but don't risk your neck to the extent where they can get a hold of you.
There are plenty of ways to support the effort safely though. Make sure people who wish to access scientific papers and books know where to go, and make sure your elected officials know about the need for publicly funded science to be published free of charge and open access (retroactively, too).
There's no easy solution for scanning physical books, is there?
 http://1dollarscan.com/ (no affiliation, just a satisfied customer; they can't scan certain textbooks due to publisher threats of litigation)
I've visited their office -- located in an inexpensive industrial district of San Jose -- on multiple occasions. They have a convenient process for receiving books in person.
I believe the owners are Japanese and the operation reminds me of the businesses I visited in Tokyo: quiet, neat, and über-efficient.
I wish the same could be said for the Tokyo office I work in!
I used to participate in the "bookz scene", well over a decade ago. Raiding the local public libraries --- borrowing as many books as we could --- and having "scanparties" to digitise and upload them was incredibly fun, and we did it for the thrill, never thinking that one day almost all of our releases would end up in LibGen.
my disappointment is immeasurable and my day is ruined
Continuing to be dense, why is there a difference between their "database dump" and the total of all the files they have?
Thus, 32 TB of books (over 2 million titles), 3.2 GB database.
To make sure I'm understanding this correctly:
The Libgen Desktop application (which requires only a copy of the database) would then use the DB metadata to make LibGen locally searchable, and would only retrieve the individual books/papers on request?
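In other words, something like this toy flow, where all the searching happens against the local dump and only the final fetch touches the network (the table/column names and the mirror URL here are made up, not LibGen's real schema):

    import sqlite3

    def search_local(con, query):
        # offline step: search the local copy of the metadata dump
        return con.execute(
            "SELECT title, author, md5 FROM books WHERE title LIKE ?",
            (f"%{query}%",),
        ).fetchall()

    def download_url(md5, mirror="https://mirror.example"):
        # online step: only needed once the user actually requests a book
        return f"{mirror}/book/{md5}"

    # toy demo with an in-memory database standing in for the dump
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE books (title TEXT, author TEXT, md5 TEXT)")
    con.execute("INSERT INTO books VALUES ('An Example Title', 'A. Author', 'd41d8cd9...')")
    for title, author, md5 in search_local(con, "Example"):
        print(title, author, download_url(md5))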
A more advanced version of this architecture is used by pirate addons for the Kodi media center software. Basically, you have a bunch of completely legal and above-board services like IMDb that contain video metadata. They provide the search results, the artwork, the plot descriptions, episode lists for TV shows, etc. Impossible to sue and shut down, as they're legal. Then you have a large number of illegal services that, essentially, map IDs from websites like IMDb to links. Those links lead to websites like Openload, which let you host videos. They're in a gray area; if they comply with DMCA requests and are in a reasonably safe jurisdiction, they're unlikely to be shut down.

On the Kodi side, you have a bunch of addons. There are the legitimate ones that access IMDb and give you the IDs, the not-that-legitimate ones that map IDs to URLs, and the half-legitimate ones that can actually play stuff from those URLs (not an easy task, as websites usually try to prevent you from playing something without seeing their ads). Those addons are distributed as libraries, and are used as dependencies by user-friendly frontends. Those frontends usually depend on several addons in each category, so, in case one goes down, all the other ones still remain.

It's all so decentralized and ownerless that there's no single point of failure. The best you can do is kill the frontend addon, but it's easy to make a new one, and users are used to switching them every few months.
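A stripped-down sketch of that layering, with every endpoint made up, just to show how each layer can be swapped independently and the frontend simply falls through to the next provider:

    def lookup_metadata(title):
        # layer 1: legal metadata service (IMDb-like) -> stable content ID
        return {"id": "tt0000000", "title": title}

    ID_TO_LINK_PROVIDERS = [
        lambda cid: [f"https://host-a.example/{cid}"],
        lambda cid: [f"https://host-b.example/{cid}"],
    ]  # layer 2: gray-area mappers from content IDs to file-host links

    def resolve_playable(link):
        # layer 3: host resolver that digs a direct stream URL out of the page
        return link + "/stream.mp4"

    def play(title):
        meta = lookup_metadata(title)
        for provider in ID_TO_LINK_PROVIDERS:   # redundant providers per layer,
            for link in provider(meta["id"]):   # so one takedown isn't fatal
                try:
                    return resolve_playable(link)
                except Exception:
                    continue
        raise LookupError("no working source")

    print(play("Some Show"))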
Just like any other distributed system, this is vulnerable to organized take downs and scare tactics. There was a whole bunch of mirrors of Pirate Bay, yet once most of Europe's legal systems adopted the "sharing is theft" mindset, it became pretty much impossible to find one.
Single decentralized service, providing access to all content, national and international, free of DRM, for all platforms, for a proper, fair, and non-monopolist price.
That would pull all the users who are willing to pay for content over to the paid service, and those who remain weren't going to pay regardless of what you did anyway.
Alas, Yongle Encyclopedia is almost completely lost now. Archiving is harder than you think.
Preservation is easy if you don't get invaded.
All joking aside, I do wonder whether digital or analogue formats are better able to survive into the distant future.
* What impact will DRM have on the accessibility of our knowledge to future historians?
* Is anything recoverable from a harddrive or flash media after 500 years in a landfill?
* Will compressed files be more or less recoverable? What about git archives?
* Will the future know the shape of our plastic GI Joe toys but not the content of the GI Joe cartoon?
There are 5000 year old clay tablets we can still read.
There are centuries old documents on paper, vellum etc. that we can still read.
I personally have decades-old paper documents I can easily read, and a box of floppies I can't.
It's not just a problem of unreadable physical media, I have a database file on a perfectly readable HD that was generated by an application that is no longer available. I might be able to interrogate it somehow, but it won't be easy.
Digital formats and connectivity make LOCKSS easier, so that's a plus. There's less chance of a fire or flood or space-limited librarian destroying the last known copy. However, without archivists actively transforming content to new formats as required, it might only take a few decades before a lot of content starts to require a massive effort to read.
Let's say the probability that a single copy of a physical book survives 1,000 years, is found, and is understood by an archaeologist* is pB, and the probability that a single copy of a book on an SSD survives 1,000 years, is found, and is understood is pD. Even if pB is far larger than pD, there might be so many more copies of a single book held on SSDs that the book is more likely to survive via an SSD than via a physical book. On the other hand, the technology to recover data from SSDs might not exist in 1,000 years.
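To put made-up numbers on it (purely illustrative, nothing measured):

    # paper copies: each far likelier to make it, but few of them
    pB, NB = 1e-4, 1_000
    # SSD copies: each far less likely to make it, but vastly more of them
    pD, ND = 1e-7, 10_000_000
    print(1 - (1 - pB)**NB)   # ~0.10 chance at least one paper copy survives
    print(1 - (1 - pD)**ND)   # ~0.63 chance at least one SSD copy survives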
It could also be the case that each generation would copy these books onto new digital media, providing an unbroken chain of copies. The oldest copy of the Iliad is the Venetus A, which is from about 1000 AD (1,000 years ago), despite the Iliad probably first being written down around 800 BC (2,800 years ago). It was copied from earlier copies of copies of copies.
I really don't know how this will play out and I've been unable to find research on how long SSD and flash memory based media survives especially if buried in a landfill.
* - If archaeologists exist in the future. The current push from the STEM boosters to defund and de-emphasize the humanities may result in a near-future without archaeologists or funded archaeological projects. Over 1,000 years the entire field could die.
Yes. That's what I mean by LOCKSS being easier.
> is found and is understood by an archaeologist,
There is a problem with merging these two probabilities.
The probability of finding a book is of course massively smaller than the probability of finding a digital copy.
The probability of understanding a book is so much greater than the probability of understanding a file on a disk.
This makes it more likely that the physical book will survive in a meaningful way.
> It could also be the case that each generation would copy these books onto new digital media
This is what I mean by archivists actively transforming the content. Regarding written content like the Iliad, copies and translations can be made centuries apart. Content in digital formats may need to be transformed whenever the application that reads it is discontinued.
The nice part of a book in an apocalyptic scenario is that you can copy it even if you don't know the language. You don't need a special tool for this, only one capable of marking a surface. It wouldn't be fun or fast, but it's possible and it's what monks did for centuries. Would archeologists 1000 years from now be lucky enough to find a SATA cable too?
: "X-rays reveal 1,300-year-old writings inside later bookbindings" https://www.theguardian.com/books/2016/jun/04/x-rays-reveal-...
Redundant, shared servers ARE a forever solution. Making sure your data is on one of the ones that makes it seems like a vastly easier proposition to me than writing data to clay tablets and trying to keep those from ending up in a dump somewhere.
If we are talking about archaeologists, rather than historians, even ASCII and Unicode could be a challenge to work out.
Yes we should learn from history, but we should also not assume that everything that happened before will happen the same way again, given how much of our world has changed.
You can reuse interfaces more easily on data, and current ML could probably pull some of the weight of interpreting old data right now, not to mention what we'll have 50 years from now.
Compare the capabilities of digital historians today to those 10- and 20-years ago respectively. It’s night and day.
If you found a mysterious archive object and had no idea what it was - CD-R, hard drive, SSD, whatever - not only would you have to reinvent an entire hardware reader around it, you would also have to work out the file structure, extract the data (some of which could be damaged), and reverse engineer the container file formats and the data structures inside them.
If you got all of that right, you'd eventually be able to start trying to translate the content of the text, audio, images, videos (how many compression formats are there?) into something you could understand.
A much more advanced civilisation would struggle with making a cold start on all of that. In our current state, we'd get nowhere if we didn't already have some records explaining where to begin.
1. Even if the CD-R has been crushed and shattered you could use a modern and cheap microscope to read continuous pits and lands off the disk [0,1]. It would be clear to anyone familiar with information theory how to translate the pits and lands into a series of arbitrary symbols which encode data.
2. This data would at first be meaningless. However, the mathematical relationships of a simple error correcting code would stand out. This would allow them to recover corrupted data. Once the error correcting code was stripped out they would have a transcript of the raw data.
3. They would notice a pattern in the data. There would be long high-entropy regions and then very short low-entropy regions. They would probably notice that some of the low-entropy regions had every 8th bit set to zero (ASCII) and that, taken in 8-bit chunks, these regions had roughly the same number of symbols as the Latin alphabet. If they were familiar with English they might quickly decode these regions using letter-frequency correspondence with another English text.
4. The high-entropy regions would be far harder to decode. However, these future archaeologists would be faced with the obvious data patterns of the frames of an MP3. Decoding the first MP3 would be a serious project involving many institutions over many years, but once it was done it would allow the decoding of all artifacts that use the MP3 and related encoding formats. Possibly someone would find a "rosetta file", a disk that contained both a .wav file and an encoded MP3 of the same song. More likely someone would find an MP3 player and then reverse engineer the decoding algorithm.
: "Being able to see the tracks and bits in a CD-ROM" https://superuser.com/questions/870776/being-able-to-see-the...
: "CD-ROM Under the Microscope" https://www.youtube.com/watch?v=RZUxemOE07Q
By which I mean, many file formats are syntactically much simpler and more obviously structured than natural languages. It might take an entire field to reverse engineer weird formats like .DOC once all knowledge gets lost, but I doubt this will be the case for bitmaps or UTF-8 ...
And any modern compression is probably right out without technological continuity.
This is what the philologist would see:
How it would probably go:
1. Hmmmm, there are only two symbols, A and B; these symbols can't be words since no language has only two words. Thus the words must be made of strings of these symbols.
2. Every 8th symbol* is an A. Let's try putting the symbols in groups of size 8.
3. These groups of 8 can't be words because they repeat far too often and they would only allow 128 possible words. Thus these groups of 8 might be letters in an alphabet.
4. Does the frequency of these possible letters fit any known language? Yes, English.
5. Which group of 8 is "e"?
A few minutes later and the clay tablet is decoded.
* - This is not always true in UTF-8, but it is true in most encodings of Latin alphabets, including this example. Even with some variable-length characters thrown in, this fact would stand out.
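A toy version of steps 1-4 in Python; the "archaeologist" here is secretly handed ASCII stored in bytes, and a real decipherment from frequencies alone would need far more text and cross-checking, but the structural detection falls out mechanically:

    from collections import Counter

    def bits_of(text):
        return "".join(f"{ord(c):08b}" for c in text)

    def analyse(bits):
        # step 2: look for a period and offset at which one bit position is constant
        for period in range(2, 17):
            for offset in range(period):
                if set(bits[offset::period]) == {"0"}:
                    print(f"bit {offset} of every {period} is always 0 -> try {period}-bit symbols")
                    symbols = [bits[i:i+period] for i in range(0, len(bits) - period + 1, period)]
                    counts = Counter(symbols)
                    # steps 3-4: few distinct symbols => an alphabet, not a vocabulary
                    print(len(counts), "distinct symbols")
                    print("most common:", counts.most_common(5))
                    return
        print("no fixed-width structure found")

    analyse(bits_of("it was the best of times, it was the worst of times " * 20))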
It's even fairly plausible that the UTF-8 numerical encoding could be reverse-engineered from a few samples; most languages' text generally only uses characters from few enough blocks to be identifiable. If you're really motivated, you can probably work your way through most of the languages with phonetic writing systems.
But then there's CJK Unified Ideographs, where the characters that get used are scattered essentially randomly because the ordering is only relevant if you already know how many and which characters were encoded at what point in the history of Unicode.
There are large swaths of Unicode which, if somehow totally lost, would essentially require finding font data or character reference tables to recover.
Code breakers have decoded ciphertexts which used a code such that each word was replaced with a number. To make it even harder, common words would be replaced by more than one number to defeat common frequency-analysis techniques. This was often done with pen and paper.
Yuri Knorozov managed to decipher the Mayan script. That was a significantly harder task than recovering UTF-8 mappings because he had very little to work with on the source language (he did have some things).
 https://www.lockss.org/ ; https://en.wikipedia.org/wiki/LOCKSS
If we can't effectively warn a future (>10,000 years) generation to stay away from something that may harm or kill them, what chance do we have of making a universally understandable archive of data?
Just about everybody in academia uses it, too, especially in the case of Scihub. I can't imagine taking the time to actually check whether I have access to some journal when I want to read a paper, let alone jump through all the hoops before you can get a PDF. The first thing we did when my partner's paper was recently published was check to see if it was on Scihub yet. (It was!)
Today I learned that Library Genesis is actually "powered by Sci-Hub" as its primary source.
So I guess they're sister projects by similarly minded people (who seem to be mostly/originally based in Slavic countries, which I find interesting culturally - perhaps it's due to a looser legal environment + activist academics?).
> Just about everybody in academia knows about it.
That really says something about the state of society, this tension between copyright laws (and the motivations behind them) and the intellectual ideal of free and open access to knowledge.
What do you mean?
There are well known cases of genetics and cybernetics being banned for ideological reasons during Stalin's time. Scientific books and articles of convicted 'enemies of the state' were dangerous to possess in that time too. Some scientists used ideological 'arguments' in scientific debates which were dangerous to argue against.
But all that, AFAIK, ended after Stalin's death in 1953.
Moreover, I've never heard anything about mathematics in this regard.
 An example was when Margulis won the Fields medal: https://en.wikipedia.org/wiki/Grigory_Margulis. There are many other examples too.
It was never in Soviet ideology to hide knowledge behind paywalls. See, for example, this post about the Mir publishing house and the warm comments from Indians who grew up with their books. Sci-Hub's ideology is just a continuation of this approach.
Actually, that was the point of what I was saying—the mathematicians had to be inventive and thus passed around preprints that they knew would also be read in the West.
You see the same situation with Asia --- it's a collectivist culture, they have a very different perspective on IP in general.
I realize that doesn't solve the access problem for most people, as most of the users who need this research might not know how to use Usenet or even be familiar with it at all, but I think the first major concern would be to secure the entire repository on a stable network. Usenet seems like a good place for that even if it doesn't serve as a means of distribution. Encrypting the uploads would make them immune to DMCA takedowns provided that the decryption keys weren't made public and were only shared with individuals related to the maintenance of the LibGen project.
However, in my personal experience, I have seen no issues downloading old data from any binary group. At least not with the provider I have. In fact, just this past week I obtained something sizable (several GBs) with no damaged parts, so I didn't even need the parchive recovery files at all. This has always been my experience. I've never seen anything like the pruning you are talking about. That sounds more like an issue with your specific provider to me.
The hours that LibGen saved me in gathering all the sources for my research must be in the hundreds. Thank you!
That might change though as people start including video + data within papers and have new notebook formats that are live and contain docker containers/ipython, etc.
It's a shame we can't just mail these around.
I picked up 32TB for just under $500 with discount over the holiday that way.
The theory is that this is a form of market segmentation, where enthusiasts/companies are willing to pay more for a bare drive than regular consumers.
For instance, I got all white label WD80EMAZs (256MB cache, non-SMR, same firmware as the Reds) in this batch, so I had to insulate the 3.3V pins.
There are also true Reds, 128MB and 512MB cache drives, helium-filled drives, 7.2K HGSTs slowed to 5.4K, and other variants.
Encrypted shards partially solve this, but then you hit the quandary of "But what if I have a shard of something illegal or undesired enough to upset the wrong people?", which has not been thoroughly tested in our legal system.
For example right now in Germany I can get a WD 8TB USB 3.0 drive for 135€ but the cheapest internal 8TB drive costs 169€.
Any idea why? It's puzzling.
In large ZFS arrays, many people are using them with great success, at no greater or lesser annual failure rate than the expensive enterprise hard drives.
I've read these reports as well, but I can say that it's not my experience (we've gone through a few rounds of shucking at the Internet Archive, for economy and in one case necessity after the 2011 Thailand floods pinched the supply chain). Our raw failure rates on shucked drives are significantly higher, and the drives themselves are typically non-performant for high-throughput workloads (often being SMR disks/etc, though hopefully the move away from drive-managed SMR will finally kill that product category off).
Shingled magnetic recording (SMR) is a magnetic storage data recording technology used in hard disk drives (HDDs) to increase storage density and overall per-drive storage capacity ... The overlapping-tracks architecture may slow down the writing process since writing to one track overwrites adjacent tracks, and requires them to be rewritten as well.
Of course, that wouldn't explain the difference between a WD external drive and that same drive as an internal drive - assuming that WD actually manufactures both (and doesn't just license the name provided the 3rd party uses their drives)...
EDIT: To find libgen's torrents health, check out this google sheet: https://docs.google.com/spreadsheets/d/1hqT7dVe8u09eatT93V2x...
Thanks frgtpsswrdlame for the heads up.
I'm not sure how people set up the rotation though; that can't be an incredibly common feature, but I could be wrong.
I want a full mirror, and ain't nobody got time to deal with 2000 torrents, many of which have no seeders. That's a really dumb way to run this particular railroad.
Also, the UI for adding many torrents is much nicer than the UI for selecting a non-trivial subset of files inside a single torrent. And many parts of the ecosystem mishandle partial seeds, i.e. peers that do (and for the foreseeable future will) only seed a subset and not leech any other parts: they often get treated as leechers, despite not really being leechers.
TL;DR: 2k files are just a watchfolder and a cp * watchfolder/ away from working. Scaling does not work with one fat 32TB, however.
See here for example: https://news.ycombinator.com/item?id=16521385
Unfortunately the performance issues and overhead are just too much.
For example, mirroring Project Gutenberg.
I think this perspective really depends how you're trying to use IPFS. For example, the ease of use of running a local IPFS node has improved a ton with IPFS-desktop & companion, and tools like ipfs-cohost (https://github.com/ipfs-shipyard/ipfs-cohost) also improve usability and tooling for shared community hosting. I think this has actually seen a ton of progress and end consumer usability has improved significantly in the past year (and is now even coming out-of-the-box in browsers like Brave and Opera!)
I definitely hear that running a beefy IPFS node for local hosting/pinning still needs work, but pinning services like Infura, Temporal, and Pinata have helped abstract some of those challenges from individual applications like this. From a developer perspective, there are a lot of performance improvements for adding, data transfer, and content resolution coming down the line very soon (https://github.com/ipfs/go-ipfs/issues/6776), and there's also been a lot of work improving the ipfs gateways and docs to support the dev community better. I definitely think there is still lots of room for improvement - but also lots of progress to recognize in making IPFS usable to exactly these sorts of applications. Check out qri.io - they're doing collaborative dataset hosting like this and it's pretty slick!
> pinning services like Infura, Temporal, and Pinata have helped abstract some of those challenges
I wonder if you omitted Eternum on purpose :P
(For context, I created and run Eternum, and that experience is mostly where my opinion of IPFS comes from.)
If you ever want to chat about how we can make pinning services on IPFS easier to run, would love to chat! I know cluster has been researching how to improve "IPFS for enterprise" usage and would really appreciate the user feedback!
I would love to chat. My #1 request is to make pinning asynchronous, and generally improve pinning performance. I think that's most of my frustration, followed by slow DHT resolves, followed by large resource usage by the node.
The basic problem is that the DHT is currently not working, and IPFS is using the DHT in a very demanding way compared to, say, bittorrent or DAT.
I know that there are some fixes in the works, but the next releases really need to solve the DHT problem, otherwise no amount of usability improvements is going to matter...
That said, maybe Dat would be better, especially if it works well.
The frontend would still be a user-friendly HTTP web-application (or collection of several) that pulls (portions of) the archive from the distributed/resilient backend to serve individual files to clients.
The backend can be a relatively obscure, geeky, post-BitTorrent p2p software like IPFS or Dat, as long as those willing to donate bandwidth/storage can run it on their systems. This is a vastly different audience from 'most people'.
The real question is which software's features best fit the backend use-case (efficiently hosting a very large and growing/evolving, IP-infringing dataset). Dat has features to (1) update data and efficiently synchronize changes, and to (2) efficiently provide random-access data from larger datasets. Two quite compelling advancements over BitTorrent for this use-case.
There's also ZeroNet, though IDK if it can handle the traffic.
Microdots managed about 32 MiB per square centimeter (going by the "you could fit the Bible 50 times in a square inch" measure). That's a completely arbitrary density to target, since those were photographs which were enlarged and shrunk, and you could hypothetically use any encoding for your gold etchings; it also leaves out the question of how fine you can etch gold plates while still having the engraving be 'robust'.
But in any case, that gives you a target area of ~325 square meters for a 100TiB archive.
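As a sanity check on that figure, taking the 32 MiB per square centimeter at face value:

    mib = 100 * 1024 * 1024   # 100 TiB expressed in MiB
    cm2 = mib / 32            # at ~32 MiB per square centimeter
    print(cm2 / 10_000)       # ~328 square meters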
That's a lot, but not like a crazy obviously impossible number like a million square km or something.
(Which isn't necessarily impossible, but still a lot, especially for an interstellar probe.)
That puts it in the realm of the Voyager probes for total cost.
But I was also extremely generous with the thickness in the first post. In real life they would almost certainly be closer to .05mm than 1.2mm, and probably not made out of solid gold.
The point was to show that even with some rather pessimistic assumptions the project was within human scale and even had some precedent.
Re: Falcon 9's, Voyager was lifted on a Titan IIIE, which had a LEO payload of 15,300 kg, compared to 22,800 kg for the Falcon 9 to LEO. Assuming Voyager was near the capacity of what could be ejected from the Solar System by a Titan IIIE, you'd need to send up ~7 Falcon 9's to Voltron together in orbit.
I'd bet that any civilization would be flying there ASAP to see what it is about.
The crux is really making a simple radio transmitter that can survive that long, but the requirements are very loose (i.e., the frequency is allowed to drift so long as it remains somewhere in a region that can be received on Earth's surface; it needs a very long-lasting power source but doesn't need a lot of power; unidirectional transmission is fine as long as it arrives).
Keep in mind that they don't know our written or computational language and there's nothing about our technology that is inherently self-explaining/obvious.
Even the assumption that they'd use binary computers (rather than trinary, or other technology not based around electrical voltages) is open to debate.
If you assume motivated readers and human-level intelligence, you could end up with good results. It might take a decade or three, and a lot of mental firepower, but they could get there.
(The outer layer is the hardest, since our information density is lowest. Our "description of how to build a magnifying glass" might cover just the basic optics of curved glass and a very basic description of how to get to glass and how to curve it correctly, leaving a lot of the details up to the finder. After all, we did it without help. We're not so much trying to solve this problem for the finder as help them on their way.)
So, before jumping in to argue, remember I'm stipulating decades of dedicated effort by presumably an interested consortium of... whatever they are. I think we can safely stipulate an amount of effort at least as large as our society has dedicated to, say, Linear A and B, or the Voynich manuscript. I'm not trying to spec "Ugh wanders out of the jungle, sees our pretty rock, and personally has a 20th century civilization up and running in 10 years" or anything crazy.
Quite coincidentally, there is currently an anime running named "Dr. Stone" which is about exactly that: jump-starting human civilization from the stone age to the modern day as fast as possible. At least in-story, it's been a few months and they're currently building radios and have a waterwheel generator.
In Vernor Vinge's "A Fire Upon the Deep", a very advanced civilization in the outer galaxy that can't reach where we are for $REASONS has as a persistent hobby speculation on the fastest way to bootstrap advanced civilizations, assuming essentially-perfect knowledge of physics instead of blundering around.
Not necessarily for aliens ... but why not keep a backup in a safe place outside the dangers of earthlings?
OTOH, I think a sufficiently advanced alien intelligence will be able to decipher the information structures we use regardless of differences in technology. It's possible, though, that there will be missing links in that archive, which will need to be supplemented with a primary, secondary, and high school curriculum.
If one were to receive an object, how would it be indicated that there is a message embedded in there? Given that an intelligence could recognize that there was a message embedded, could it eventually be deciphered?
- A tiny well behaved client that starts with the OS.
- It downloads rare bits of the archive at 1 KB/s, obtaining 1 GB every ~278 hours. It should stop at around 100 MB to 5 GB.
- It periodically announces what chunks/documents it has.
- It seeds those chunks at 1 KB/s
- Chunks/documents that have thousands of seeds already are not announced. Eventually those are pruned.
This gets things to the point where everyone can help without it costing them anything.
If someone is trying to obtain a 20 MB PDF it would take five and a half hours using a single 1 KB/s seed. With just 50 seeds it's about 7 minutes.
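A rough sketch of such a client; the tracker/announce interface here is entirely hypothetical, and a real version would sit on top of BitTorrent or similar rather than reinvent it:

    import time

    RATE_LIMIT  = 1024        # bytes/sec, the 1 KB/s proposed above
    LOCAL_CAP   = 10**9       # stop growing the local store around 1 GB
    WELL_SEEDED = 1000        # chunks with ~1000 seeds need no more help

    class PoliteSeeder:
        def __init__(self, tracker, store):
            self.tracker = tracker   # hypothetical tracker/DHT client
            self.store = store       # dict of chunk_id -> bytes

        def local_size(self):
            return sum(len(b) for b in self.store.values())

        def step(self):
            # download the rarest chunk we don't have yet, at the capped rate
            if self.local_size() < LOCAL_CAP:
                chunk_id, seeds = self.tracker.rarest(exclude=self.store.keys())
                if seeds < WELL_SEEDED:
                    self.store[chunk_id] = self.tracker.fetch_chunk(chunk_id, rate=RATE_LIMIT)
            # announce only chunks that aren't already abundant; prune the rest
            for chunk_id in list(self.store):
                if self.tracker.seed_count(chunk_id) >= WELL_SEEDED:
                    del self.store[chunk_id]
                else:
                    self.tracker.announce(chunk_id)
            # serve whatever peers ask for, again capped at RATE_LIMIT
            self.tracker.serve(self.store, rate=RATE_LIMIT)
            time.sleep(60)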
Here's a blog post about our datastores for some background.
... essentially Polar is a PDF manager and knowledge repository for academics, scientists, intellectuals, etc.
One secondary challenge we have is allowing for sharing of research but I'd like to do it in a secure and distributed manner.
Some of our users are concerned about their eBooks being stored unencrypted and while for the majority of our users this will never be a problem I can see this being an issue in countries with political regimes that are hostile to open research.
In the US we have an issue of researchers being harassed over climate change btw. Having a way to encrypt your knowledge repository (ebooks) would help academic freedom as your employer or government couldn't force you to give them your repository.
But what if we went beyond this and provided a way to ADD documents to the repository from a site like LibGen?
Then we'd have the ability to easily, with one click, encrypt the document (end to end) and add it to our repository.
If we can add support for Polar to allow colleagues to share directly, this would be a virtual mirror of LibGen.
Alice could add books b1, b2, b3 to her repo and then share them with Bob: the two of them would generate a shared symmetric key for sharing the books, and only he would be able to see b1, b2, b3.
No 3rd party (including me) would have any knowledge what's going on.
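A minimal sketch of that scheme using Fernet keys from the Python cryptography package. How Alice and Bob exchange the shared key (e.g. via an authenticated key agreement) is assumed to happen out of band, and none of this is Polar's actual implementation:

    from cryptography.fernet import Fernet

    def encrypt_document(plaintext):
        # each document gets its own key; only ciphertext goes to the store
        doc_key = Fernet.generate_key()
        return doc_key, Fernet(doc_key).encrypt(plaintext)

    def share_keys(doc_keys, shared_key):
        # Alice wraps the per-document keys for Bob under their shared key
        f = Fernet(shared_key)
        return {name: f.encrypt(key) for name, key in doc_keys.items()}

    def open_shared(name, wrapped_keys, shared_key, ciphertext):
        doc_key = Fernet(shared_key).decrypt(wrapped_keys[name])
        return Fernet(doc_key).decrypt(ciphertext)

    # example: Alice adds b1 and shares it with Bob
    shared_key = Fernet.generate_key()   # assumed to be exchanged out of band
    k1, c1 = encrypt_document(b"contents of b1")
    wrapped = share_keys({"b1": k1}, shared_key)
    print(open_shared("b1", wrapped, shared_key, c1))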
I'm going to assume our users are not going to do anything nefarious or pirate any books. I'm also certain that they're conforming to the necessary laws ...
The challenge though is that while we'd be able to have a mirror of LibGen and more material, it would be a probabilistic mirror - I'm sure we'd have like 60% of it but the obscure material wouldn't be mirrored.
Right now our datastores support just local disk, and Firebase (which is Google Cloud basically). While we would encrypt the data end to end in Google Cloud I can totally understand why users might not like to use that platform.
One major issue is China where it's blocked.
Something like IPFS could go a long way to solving this but it's still very new and I haven't hacked on it much.
Indeed, what an intellectual tragedy..
> In August 2010, Google put out a blog post announcing that there were 129,864,880 books in the world. The company said they were going to scan them all.
That seems like a surprisingly "small" number.
Well, in trying to picture a physical library with 130 million books, maybe that's a realistic estimate. But compared to, say, the recently discovered data hoard of more than 2 billion online identities, it's minuscule.
SciHub and LibGen are truly the modern-day Library of Alexandria. The fact that they're being called "Pirate Bays of Science" - and that providing free and open access to all books in the world is illegal - just goes to show that our civilization's priorities are misdirected.
- The total number of books -- not titles, but actual bound volumes -- in Europe as of 1500 CE was about 50,000. By 1800, the total was just under one billion.
- The library of the University of Paris circa 1000 CE comprised about 2,000 volumes. It was among the largest in Europe.
- The Library of Constantinople in the 5th century had 120,000 volumes, the largest in Europe at the time.
- A fair-sized city public library today has on the order of 300,000 volumes. A large university library generally a million or so. The Harvard Library contains 20 million volumes. The University of California collection, across all ten campuses, totals more than 34 million volumes.
- The total surviving corpus of Greek literature is a few hundred titles. I believe many of those were only preserved through Arabic scholars, some possibly in Arabic translation, not the original Greek.
- There's an online collection of cuneiform tablets. These generally correspond to a written page (or less) of text, with the largest collections numbering in the tens of thousands of items.
- As of about 1800, the library of the British Museum (now the British Library) had 50,000 volumes. Again, among the largest of its time.
- From roughly 1950 to 2000, about 300,000 titles were published annually in the United States and/or English-language editions. R.R. Bowker issues ISBNs and tracks this. From ~2005 onward, "nontraditional" books (self- / vanity-published) have been at or above about 1 million annually.
- The US Library of Congress, the largest contemporary library in the world, holds 24 million books in its main collection (another 16 million in large type), and has 126 million catalogued items in total (2015).
- At about 5 MB per book, in PDF form, total storage for the 38 million volumes of the Library of Congress would be slightly under 200 TB. At about $50/TB, that's $10,000 of raw disk storage. (Actual provisioning costs would be higher.) Costs are falling at 15%/year.
- Total data in the world comprises far more than books, and has been doubling about every 2 years. Or stated inversely: half of all the recorded information of humankind was created in the past two years.
Some of this is off the top of my head, but partial support for the facts from:
> half of all the recorded information of humankind was created in the past two years
That is shocking to imagine, and it's exponentially growing.
It reminds me of Vannevar Bush's "As We May Think", pointing out the emerging information overload in society. It certainly puts things in perspective, how we (humanity) have been making a conscious, collaborative effort to develop globally networked computers, one of whose important functions is to help us organize all the information, including books.
The conundrum it seems is that technology is also a massive multiplier/amplifier of the amount of data, that its capacity to help us organize would never catch up to what it's helping to produce.
> total storage for the 38 million volumes of the Library of Congress would be slightly under 200 TB
I guess it's redundant to say, but I'm sure in the near future that would fit on a thumb drive!
I've been listening to Peter Adamson's "History of Philosophy Without Any Gaps" podcast, which is excellent, and spends a fair bit of time looking at the historiography of the topic -- what works were preserved, how, various interpretations, practices, preservation, and losses. Interesting to note that most of the preserved Greek and Roman works were found in obscure Arabian monasteries and libraries. The mainstream collections themselves were often lost in raids, fires, or other mishaps. Which makes the LibGen situation all the more relevant and urgent.
(I'm a huge user of the site and others like it, for what it's worth.)
On the amount of total data being captured: there's a huge difference between quantity and quality measures of information. They're almost certainly inversely related.
Of what books were written in antiquity, up to the time of the printing press, say, odds were fairly strong that a work would be read.
At 1 million new titles being published per year, there are only 330 people in the US per book, or roughly 400 native English speakers worldwide. (With ~2 billion speakers worldwide, the total audience might reach 2,000 per book). Clearly, most of what's being written will have a very small, or no, audience.
For machine-captured data, the likelihood that any of it is seen directly by a human is vanishingly small. More of it will undergo some level of machine processing or interpretation, though even that only applies to a fairly small fraction of data. Insert old joke about the WORN drive: write once, read never.
As for storage costs (and/or size), at a 15% cost reduction per year, storage halves every 4.67 years (4 years and 8 months), which means that in 10 years, the $10k price tag becomes $2k, and in 20 years, it should be under $400. For the entire Library of Congress collection.
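Checking that projection at a steady 15% per year decline:

    print(10_000 * 0.85**10)   # ~1,969 -> roughly $2k in 10 years
    print(10_000 * 0.85**20)   # ~388   -> under $400 in 20 years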
Flash drives seem to be increasing in capacity by a factor of 10 every 2.5 years. There are now 2 TB flash drives, so 200 TB might be as little as 5 years out. That ... still sounds optimistic to me.
The more practical problems are simply organising, cataloguing, and accessing the archives. This is an area that still needs help.
1. I think that's from "Science and the Citizen", 1943, though the BBC and I have a disagreement concerning access. https://www.bbc.co.uk/archive/hg-wells--science-and-the-citi...
"Among some excellent men, there were some weak, average, and absolutely bad ones. From this mixture in the publication, we find the draft of a schoolboy next to a masterpiece." — Denis Diderot
Taking the quote out of context (and aside from its historical male-centered language) - it sure rings true of the current state of the web, as well as books.
About the inverse relationship of quantity vs quality, we seem to be drowning in quantity! As you've pointed out, there's great need for thoughtful organization and curation.
I like how you break down the quantifiable aspects to draw a historical trend and future projection. The rise of "data science" and "big data" in the past few decades really makes sense in this light.
I'm sure machine learning and "AI" will play an increasing role in the task of organizing and processing all this information, but at the bottom I feel that the most value probably comes from human curation.
LibGen has been an amazing resource for me as a lover of knowledge, a life-long book worm. I've got bookshelves and boxes full of physical books as well, but it's a drop in the ocean..
"As long as the centuries continue to unfold, the number of books will grow continually, and one can predict that a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe. It will be almost as convenient to search for some bit of truth concealed in nature as it will be to find it hidden away in an immense multitude of bound volumes. When that time comes, a project, until then neglected because the need for it was not felt, will have to be undertaken...."
... and on for another several paragraphs. It's an extraordinarily keen observation on the state and future of knowledge. At the always excellent History of Information website:
(Diderot is on my list of authors to explore in more depth.)
The fact that the quality of any given information or exchange is often (though not always) entirely divorced from its source (or author) is another interesting note. There are a few points here worth expanding on.
At least probabilistically, there are spaces (real or virtual) in which it's more likely to encounter good ideas. HN, for its various failings, does well in today's Net. Google+, for all its faults, was similarly useful.
Size matters far less than selection. The tendency for centres of learning, research, and/or inquiry (and not necessarily in that order) to emerge is one that's been long observed, and their durability is remarkable. The first universities (Bologna, Padua, Oxford, Paris, Cambridge, Heidelberg, and others; see: https://en.wikipedia.org/wiki/Medieval_university) are often still, 600-700 years later, among the best in the world. Certainly in the US, Harvard, Yale, Princeton, M.I.T., among the earliest founded, remain the most prestigious. Though as noted in the conversation with Tyler Cowen and Patrick Collison, the list from 1920 is "completely the same, except we've added on California".
What happens as the overall quantity and flux of information increases is that more effective rejection systems are required. That is: you've got too much information flowing in, and you want a way to cheaply, with minimal effort or consequential residual load, reject information that may be irrelevant, with minimal bias.
There are numerous systems that have been arrived at, and many of our cognitive biases or informal tests for truth arise out of these (optimism, pessimism, availability, sunk-cost, tradition, popularity, socio-ethnic prejudice, etc.). Randomised methods are probably far fairer and less prone to category error. Michael Schulson's sortition essay in Aeon remains among the best articles I've read in the past decade, if not several:
"If You Can't Choose Wisely, Choose Randomly"
Another fundamental problem is self-dealing and self-selection within institutions. Much of the failure within academia (also touched on by Cowen and Collison, who, I'll note, I don't generally agree with, though they are touching on and making many points I've been pursuing for some years) comes from the fact that it's internal selection of students, faculty, articles, topics, and ideologies, rather than strict tests of real-world validity, that promotes these structures.
The same problems infect government and business -- it's not as if any one social domain is immune to this.
Oh, and another lecture by H.G. Wells on that topic:
"...When I go to see my government in Westminster I find presiding over it the Speaker in a wig and a costume of the time of Dean Swift, the procedure is in its essence very much the same. The Members debate bring motions and when they divide the art of counting still in governing bodies being in its infancy they crowd into lobbies and are counted just as a drover would have counted his sheep two thousand years ago...."
(Audio quality is exceptionally poor, 1931 recording.)
Partial transcript: http://www.aparchive.com/metadata/INTERVIEW-WITH-H-G-WELLS-S...
AI ... may be useful, but seems to be result-without-explanation, a possible new form of knowledge, to go with revelation (pervasive if not particularly accurate), technical (means), and scientific (causes / structural).
Wholehearted agreement on LibGen.
Very enjoyable conversation BTW, thank you.
Briefly: the article distinguishes "endocrinal" vs. "distributed" decisionmaking.
This applies at some levels, but not at others.
For individual humans, we don't have the option of rewiring our consciousnesses, which are rather pathetically single-threaded, and can at best multitask poorly by task-switching, at a very great loss of task proficiency.
Even within collective organisations (companies, governments, organisations, communities), the multiple-independent-actors approach works where those actors' actions are autonomous and independent of others. Or, in the alternative, where they work without mutual conflict toward a common goal.
But you get problems where either individual actors' motivations and actions are in conflict, or in which a single global decision must be made (as with various global catastrophic risks), and multiple independent decisions cannot be arrived at. Even for noncritical arbitrary decisions, such as which side of the road to drive on, in which there is no compelling argument to be made for one side or the other, but in which both sides cannot be simultaneously selected, you need some global decisionmaking capacity.
When you reach the point of either an existing decisionmaking system (as in: a single human, with the finite and largely immutable information acquisition and processing capabilities corresponding), or a multi-agent system which must reach a common decision, you've got the challenge of limiting data intake to that amount which allows effective function within the environment, and avoids overloading capabilities or ineffective action.
It's an attractive concept, that human society is structurally similar to a brain, and that an individual is a neuron. (If humanity is the brain, I suppose the rest of the Earth is the body. We're not doing too well as the self-appointed brain of the operation.)
My first reaction to the analogy of "endocrinal" (one-to-many) and "neural" (many-to-many) decision making, is that it's missing a primal psychological/biological motivation of humans to seek to dominate others of its own kind as well as all of nature. I'm not familiar enough with biology to say definitively, but I'm pretty sure the endocrinal system does not actively seek to subjugate the neural system (or vice versa) and dominate the whole body.
Social organization, it seems to me, is more a function of power, very small groups gaining advantage and dominance over vastly larger groups of people, than that of collaboration for mutual benefit. (I might be a bit too cynical of political motivations and authentic democracy these days.)
From the final paragraph:
> ..the current global brain is only tenuously linked to the organs of international power. Political, economic and military power remains insulated from the global brain, and powerful individuals can be expected to cling tightly to the endocrine model of control and information exchange.
I'd disagree with this, and say that the global brain (if we mean the Internet and its empowerment of globally networked intelligence) was born from the wombs of "political, economic and military power". It never achieved escape velocity to become a truly free, autonomous and collaborative, neural model of decision making.
To backtrack a bit:
> Well-connected collective entities like Google and Wikipedia will play the role of brainstem nuclei to which all other information nexuses must adapt.
The most powerfully well-connected collective entities are international political/financial/corporate entities, and indeed do they more or less dictate how all information nexuses (nexii?) must adapt.
One biological analogy that comes to mind, is how propaganda and "disinformation" act like neurotoxins in the social brain, introducing noise/entropy, skewing its coherence, and preventing well-informed and orchestrated cooperation.
Another is how established political powers have a well-developed "immune system", composed of mass media, legal structures, military/police force, surveillance of the public. This immune system could be seen at work, for example, at the environmental protests at the Standing Rock Indian Reservation.
The final sentence of the article:
> This formidable design task is left up to us.
By this I assume the author means, evolving the global brain. Quite a challenge! From my perspective, it's going to be a historic struggle: design or be designed.
I wonder why the Copyright Office didn't just buy Google Books, would only have cost a few hundred million $ ?
This is where the project derailed and never quite recovered.
> As Tim Wu pointed out in a 2003 law review article, what usually becomes of these battles—what happened with piano rolls, with records, with radio, and with cable—isn’t that copyright holders squash the new technology. Instead, they cut a deal and start making money from it.
> now, in 2011, there was a plan—a plan that seemed to work equally well for everyone at the table
> DOJ’s intervention likely spelled the end of the settlement agreement. No one is quite sure why the DOJ decided to take a stand instead of remaining neutral. Dan Clancy, the Google engineering lead on the project who helped design the settlement, thinks that it was a particular brand of objector—not Google’s competitors but “sympathetic entities” you’d think would be in favor of it, like library enthusiasts, academic authors, and so on—that ultimately flipped the DOJ.
P.S.: Might be soon, considering that it was basically what Google was initially about, and one of the founders just resigned from Alphabet...