Hacker News | nswango's comments

Agreed, the article author is just gatekeeping as far as I'm concerned.

Most books available via PoD wouldn't be available at all without it. Not just lesser-known reissues, but also interesting new books with limited readership, and books which larger publishers would ignore because of their own prejudices.

There are more luxury editions of classics than ever so quality-sensitive book collectors are still being catered for. And it's easier than ever to find secondhand copies of old books.


It's usually extremely prominent if the book is an India only edition.

For sure, if both Amazon and the seller are doing things the way they are supposed to. There is a small chance, however, that the OP either purchases from an unfaithful third-party seller, or inadvertently receives a third-party copy even when buying from Amazon.com directly, due to how inventory is commingled (Amazon has done this for years).

I mean, it will be very obvious from the physical book itself.

> If you have an older, low-volume book, providing a shoddy version will make you more money than letting it go out of print.

From my point of view, what you are describing is "if you're the owner of an interesting but niche work, making it available in a basic version will please a lot of people who want to buy and read it".

The alternative to most of these 'shoddy versions' from reputable publishers is simply no version at all. Not sure why the author of the article wants to enforce this on people who actually want to read these books, rather than ooh over print quality and hoard them as luxury objects.


Most of these are also available as ebooks (free, in the case of public-domain works like the Bertrand Russell), which makes me think that the people who don't value paper books in and of themselves probably aren't buying the shoddy paperbacks either.

For someone who specifically likes the experience of a paper book, the option of a better print (or at least disclosure of the print quality) is highly desirable.


My comment and the part of the previous comment I was replying to are explicitly about works in copyright.

For public domain works, poor quality printed copies could never be criticized as crowding out better quality copies.


Sure. I'm not arguing it's fundamentally bad. But it's going to leave some buyers unhappy, because nowadays the point of paperbacks is that you're paying extra for a reading experience, not the text itself. An ebook always costs less (or is free).

If you order from your local bookstore a book which is being sold on Amazon as a PoD copy by a major publisher, what do you think happens?

They don't have a separate manufacturing process for mom-and-pop bookstores. Amazon do the printing and the logistics but deliver the book to the store instead of to your house so that the store can hand it to you and collect a very small amount of money.


> If you order from your local bookstore a book which is being sold on Amazon as a PoD copy by a major publisher, what do you think happens?

Nothing. Local bookstores (not just 'mom-and-pop' shops, but national chains or cooperatives) would tend not to have that title available. Is it a US thing that they would order from Amazon? Print-on-demand is potentially interesting, but just not a thing for most titles.


I disagree.

The bookshop would order the book from the distributor, who would get a copy ultimately from Amazon.

The books printed-on-demand by Amazon and sold directly by them are also sold via the traditional supply chain.


biblio.org is a good alternative where I am (although personally I don't see the problem with having either the print-on-demand books or buying used from Amazon as an option).

SQLite is an extreme outlier, not a typical example, with regard to test-suite size and coverage.

For a long time the standard way of loading JSON was using eval.

Not that long: browsers implemented JSON.parse() back in 2009, and JSON was only invented in 2001 and took a while to become popular. It was a fairly short window, more than a decade ago, when eval made sense here.

Eval for JSON also led to other security issues like XSSI.
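To make the risk concrete, here's a minimal sketch (the `steal` function and the payload are hypothetical, purely for illustration): a string can be a valid JavaScript expression without being valid JSON, so eval executes attacker code where JSON.parse just throws.

```javascript
// Hypothetical attacker-controlled "JSON" response.
let pwned = false;
globalThis.steal = () => { pwned = true; return 0; };

const payload = '{"x": steal()}'; // valid JS expression, NOT valid JSON

// The old eval-based loading pattern: executes steal().
eval('(' + payload + ')');
console.log(pwned); // true

// JSON.parse accepts only data, so the same payload is rejected.
let rejected = false;
try { JSON.parse(payload); } catch (e) { rejected = true; }
console.log(rejected); // true
```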


Problem is, it took until around 2016 for IE6 to be fully dead, so people continued to justify these hacks for a long time. Horrifying times.

And why do we no longer make use of it, but instead implemented separate JSON-loading functionality in JavaScript? Can you think of any reasons beyond performance?

I'd be surprised if there is a performance benefit to processing JSON with eval(). Browsers optimize the heck out of JSON.

You are arguing against the opposite of what the comment you replied to said.

Am I? "Can you think of any reasons beyond performance?" implies that the comment author thinks performance would be a valid reason.

Quoting my original message:

> And why do we not anymore make use of it, but instead implemented separate JSON loading functionality in JavaScript?

In other words: I'm asking why the native JSON object was added to JavaScript if we already had eval.

> Can you think of any reasons beyond performance?

One of the reasons is that the native JSON parser is faster than eval; give some other reason.


Why did you opt for such a comment when a straightforward response without the belittling tone would have achieved the same?

I actually gave it some thought. I had written the actual reason first, but I realized that the person I was responding to must know this, yet keeps arguing that eval is just fine.

I would say they are arguing in bad faith, so I wanted to enter a dialogue where they are either forced to agree or, more likely, not respond at all.



So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?

And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?


> So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?

Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.

> And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?

Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?

Those Unicode homoglyphs are a solution looking for a problem.
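For concreteness, the three look-alike capitals in question each have their own code point. A quick sketch:

```javascript
// The visually similar capital "A" in three scripts, with code points.
const letters = { latin: "A", greek: "\u0391", cyrillic: "\u0410" };

for (const [name, ch] of Object.entries(letters)) {
  const cp = ch.codePointAt(0).toString(16).toUpperCase().padStart(4, "0");
  console.log(name, "U+" + cp);
}
// latin U+0041, greek U+0391, cyrillic U+0410
```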


> Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.

Do you think 1, l and I should be encoded as the same character, or does this logic only extend to characters pesky foreigners use?


They are visually distinct to the reader.

That is entirely dependent on the font.

Unicode is about semantics not appearance. If you don't need semantics then use something different.

> Unicode is about semantics not appearance.

And that's where it went off the rails into lala land. 'a' can have all kinds of distinct meanings. How are you going to make that work? It's hopeless.


It already works.

Tell me what the problem is and what your proposed solution would be.


Infer the meaning from the context.

    a) it's a bullet point
    b) a+b means a is a variable
    c) apple means a means the sound "aaaah"
    d) ape means a means the sound "aye"
    e) 0xa means a means "10"
    f) "a" on my test paper means I did well on it
    g) grade "a" means I bought the good bolts
    h) "achtung" means it's a German "a"
I didn't need 8 different Unicode characters. And so on.

Your trolling is really rock bottom. All this already works fine. Millions of times, each day. Just once a week it fails because someone messed up. Not an issue.

I showed that there is no need for semantic information about the glyphs. It's more compelling to demonstrate a need for semantic information rather than just asserting it.

So you contradict yourself because your context window is exhausted?

Since you insist on being rude, I shall exit.

>Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?

I can absolutely tell the Cyrillic к from the Latin k, and the Latin u from the Cyrillic и.

> should not be about semantic meaning,

It's always better to be able to preserve more information in a text and not less.
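One concrete example of the information at stake: case mapping depends on the script, and a sketch like this only works because the scripts have separate code points.

```javascript
// Uppercasing is script-aware: the Cyrillic and Latin lowercase
// letters map to their own script's capital, not to each other's.
const cyrSmall = "\u0430"; // Cyrillic а
const latSmall = "a";      // Latin a

console.log(cyrSmall.toUpperCase() === "\u0410"); // true: Cyrillic а → А (U+0410)
console.log(latSmall.toUpperCase() === "\u0410"); // false: Latin a → A (U+0041)
```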


> I can absolutely tell the Cyrillic к from the Latin k, and the Latin u from the Cyrillic и.

They look visually distinct to me. I don't get your point.

> It's always better to be able to preserve more information in a text and not less.

Text should not lose information by printing it and then OCR'ing it.


But these characters only look identical in some fonts. Are you saying that if you change font, some characters in a string should change appearance and others should not?

And what about the round-trip rule?

And ligatures? Aren't those a semantic distinction?


> But these characters only look identical in some fonts.

That's a problem with the fonts.

> And what about the round-trip rule?

Print Unicode on paper, then OCR it, and you'll get different Unicode back. Oh, and normalization.

> ligatures

Generally an issue with rendering.

> semantic distinction

Unicode isn't about semantics (or shouldn't be). Consider 'a'. It's used for all kinds of meanings.
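On the normalization point, a small sketch showing that "the same text" can already have more than one code-point sequence:

```javascript
// Two encodings of é: one precomposed code point vs. a base letter
// plus a combining accent. They differ until normalized.
const precomposed = "\u00E9"; // é as a single code point
const decomposed = "e\u0301"; // e + combining acute accent

console.log(precomposed === decomposed);                  // false
console.log(precomposed === decomposed.normalize("NFC")); // true
```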


What about numbers? Would they be assigned to Arabic only? I guess someone will be offended by that.

While at it we could also unify I, | and l. It's too confusing sometimes.


> While at it we could also unify I, | and l. It's too confusing sometimes.

They render differently, so it's not a problem.


They only render differently in some fonts, on some displays.

totally not true :D

Look again at its rendering!
