Well, for my side project, https://www.bilinual.com I needed version controlling for changes that are made on a book (fixed typos, ...) and I found GITenberg project. There are several interesting tools developed during the project accessible here: https://github.com/gitenberg-dev
OP here. I doubt that the website is "a subset of Project Gutenberg" — this is not mentioned anywhere on the site.
As mentioned on the PG website, while you can use the books freely, "The name 'Project Gutenberg' is a registered trademark." I guess this is the reason they didn't mention PG.
Now I remember, that was the reason I wasn't more clear about the connection to PG when I wrote the website https://github.com/gitenberg-dev/giten_site/. Since then my co-founder Eric Hellman has been doing engineering work for Project Gutenberg, as well as running the rest of the Free Ebook Foundation, which is the parent org of GITenberg, free-programing-ebooks, and Unglue.it.
I think the GITenberg collection contains all of the books in PG. At this point, new repos are created automatically when Distributed Proofreaders adds a new book to PG. Originally, I didn't include around 400 PG books because their creators claimed copyright, and I didn't include Bruce Sterling's book because he wouldn't let me re-license it under Creative Commons rather than his pseudo-public-domain license.
Not much has been happening with GITenberg itself in the past few years. But luckily, a lot of the concepts and code are getting upstreamed into PG, which in my opinion is way, way better.
OP here. It seems Standard Ebooks doesn't provide the books in PDF format. My side project, https://www.bilinual.com, rebuilds the ebooks in PDF format with translation hints, if you don't mind learning a new language while reading your favourite books ;)
There are about 400 GITenberg books that have CC-by licensed covers provided by Recovering the Classics. If you're interested in using that art for your PDFs I can find you the index!
We have this problem with Standard Ebooks. The number of people who say things are public domain without actually checking is very high. A CC0 licence is an explicit grant of public domain status by the licensor, so in the event of any problem the legal issues rest with them.
Public domain obviously can be ascertained, but if CC0 hasn’t been granted we rely on dated reproductions: basically a photograph of the artwork in question in a book or journal with a copyright date of 1924 or earlier.
(that’s obviously specifically a US legal reading, but SE from a legal point of view is a US project)
The difference between falsely claiming something is public domain and falsely claiming to grant a CC0 license to it is going to be pretty minimal (and is likely to result in little more than "please stop" — blood and turnips and so on).
OP here. I found this website when I was looking for a way to get the updated version (with corrections) of a PG (Project Gutenberg) book, plus all the changes/diffs since the point when I scraped the book from the PG website for my language-learning side project: https://www.bilinual.com
The bilinual project also rebuilds ebooks (modern HTML, PDF, and open EPUB formats) with better quality and readability, though that is not its primary goal. Take a look at one example here:
Many words don't have translations at all, and those that do are often incorrect. This feels like a very rough machine translation. For example:
> et c'est surtout dans les paroisses riveraines du Saint-Laurent
You translate this
> and Ce east primarily in the · · some saint Laurence
While Google Translate gives
> and it is especially in the parishes bordering the St.Lawrence
If you're using machine translation, why not use a Google API that might give usable results at least? If that's not plausible, maybe you should try to get together a team of volunteers to manually translate these ebooks for language learners?
(I hope these suggestions are helpful, I'm not trying to be dismissive of your project.)
1- 404 issue: I implemented PDF generation recently, and I noticed that WeasyPrint has issues with HTML files that have too many tags (our books have around 2*number_of_words tags in them). This is not a big issue and will be fixed in the next iteration.
2- Using the Google API: Google APIs and other translation tools are great for translating sentences. However, the problem with using parallel texts for language learning is our brain's laziness. After a few pages, the brain loses its patience for solving the translation puzzles (critical thinking!?) and actually learning words and sentence structure. The focus immediately shifts to the translated sentences in your native language rather than the original text.
Personally, I learn a word for life when I slow down, think about similar words and its root, and finally look it up in a dictionary. The process itself is valuable.
3- A team of volunteers: easier said than done. The functionality is there, but I'd rather improve the suggestion engine as much as possible before involving volunteers. Are you interested in joining?
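To illustrate the word-level-hints idea above (as opposed to full-sentence translation), here's a minimal sketch. The `GLOSSARY` dictionary and `gloss` function are purely hypothetical — this is not bilinual's actual code, just the general shape of the technique:

```python
# Word-by-word glossing sketch: annotate only the words we have a
# dictionary entry for, leaving the rest untouched so the reader
# still has to engage with the original text.
GLOSSARY = {
    "paroisses": "parishes",
    "riveraines": "bordering",
    "surtout": "especially",
}

def gloss(sentence: str) -> str:
    out = []
    for word in sentence.split():
        key = word.lower().strip(".,;:!?")
        hint = GLOSSARY.get(key)
        out.append(f"{word} [{hint}]" if hint else word)
    return " ".join(out)

print(gloss("et c'est surtout dans les paroisses riveraines du Saint-Laurent"))
# et c'est surtout [especially] dans les paroisses [parishes] riveraines [bordering] du Saint-Laurent
```

Because most words get no hint, the reader's eye stays on the original sentence rather than drifting to a parallel translation.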
'hermanos' is translated 'brethren' - super-archaic.
p8, 'dedicado' in 'te has dedicado a pintar?' is translated 'hardcore'.
'sí' in 'Que sí, hombre' is translated 'do', as in do-re-mi, I guess.
p5, 11-12 have 'quieres'/'quiero' repeatedly translated as "with friends like those who needs enemies". Which is just inexplicable. I can't imagine how that would happen.
Corrupted dictionary?
..and most of the trickiest words on a page aren't translated, maybe because they're not in your dictionary or because they have 'lo' or 'se' appended.
Thanks, we are working to improve the quality of both our dictionaries and the ML engine. It's very hard to say why the translation picked these without looking into each case individually. The translations are not perfect, but it is a live project and I am trying to improve it every hour I can find.
"I can't imagine how that would happen." : Just as a hint, click on "translations" here:
Yeah, something is going very wrong. It's as if they were trying to tokenize HTML/PDF directly, pulling in a lot of extraneous characters/bytes, and then using some sort of homebrew ML project to translate it. I don't know how else you'd get such bizarre results.
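If the issue really is markup leaking into the tokenizer, extracting the text nodes with an actual HTML parser before translating avoids it. A minimal sketch using only Python's standard library (the `TextExtractor` name is just illustrative):

```python
from html.parser import HTMLParser

# Collect only text nodes, so tag names and attributes never
# leak into the tokens fed to the translation step.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.chunks)

print(extract_text("<p>Que <b>sí</b>, hombre</p>"))  # Que sí, hombre
```

Anything tag-shaped is dropped before tokenization, so a word like "sí" arrives clean instead of glued to surrounding markup bytes.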
Every day. Seriously consider using SQL to store your data.
Torn between a plain file and a SQL server, and think it isn't worth the hassle of installing a full-blown SQL server, making regular backups, etc.? Use SQLite.
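To show how little setup SQLite needs compared to a full server, here's a minimal sketch using Python's built-in `sqlite3` module (the table and sample data are just examples):

```python
import sqlite3

# One file on disk (or ":memory:" for testing) — no server
# process, no credentials, no installation beyond the stdlib.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO books (title) VALUES (?)", ("Le Tour du Monde",))
conn.commit()

rows = conn.execute("SELECT title FROM books").fetchall()
print(rows)  # [('Le Tour du Monde',)]
conn.close()
```

Backups are just a file copy, and swapping `":memory:"` for a path like `"books.db"` gives you durable storage with zero extra configuration.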
I don't like this phone since lorem ipsum dolor sit amet, sed do eiusmod videus chatum incididunt ut labore et dolore magna aliqua.
Seriously, does it even support rutrum tellus pellentesque eu tortor lowlightena capturum nulla? Apple Iphone eget blurtutate bokehus at tellus at urna condimentum mattis pellentesque?