I don't think these are very good rules.
I thought it was especially amusing that this person doesn't trust journalists to write about anything but journalism. If you believe that, why are you interested in reading about journalism in the first place?
HN has an unfortunate fixation on Michael Crichton's supposed Gell-Mann amnesia effect (wherein you read something in the news that pertains to your field, spot errors, get angry, and then forget that happened when you go on to read the next story). I propose the countervailing Dijkstra amnesia effect, wherein a technical professional produces workmanlike output with all the attendant errors and omissions that attach to any work produced by humans (Christ knows our field is intimately acquainted with errors and omissions), then forgets they did that, and expects every other professional to measure up to the standard they themselves failed to meet.
Or perhaps their fallibility in the field and readiness to acknowledge their shortcomings in it promotes lots of cooperation with and review by experts. You note narrowness, but what's more narrow than expecting a single expert in one or more fields to have a comprehensive view about anything, even their own subject much of the time? I can't think of a single person in computer science that I would trust to know everything about that topic, so why would I trust that person to cover it well over someone that has spent a career coordinating information from different people and attempting to distill it to an audience that may not be familiar with it?
And to be clear, I don't see computer science as special in this regard. I think most things interesting enough to cover are probably complex enough that multiple people are likely required to get a full picture of it.
I never said I don't read much non-fiction. I spend nearly all my time reading non-fiction. I said I don't spend 'huge' amounts of time reading popular non-fiction.
1. The reasoning holds independent of how much I have read.
2. I'm not unfamiliar with books written by journalists.
* All the Shah’s Men, by Stephen Kinzer, about how the 1953 coup against the democratically elected prime minister of Iran, Mohammad Mosaddegh, was orchestrated
* The Idea Factory, by Jon Gertner, about the history of Bell Labs
* A Mind at Play, by Jimmy Soni and Rob Goodman, about Claude Shannon
* Ike’s Bluff, by Evan Thomas, about Dwight Eisenhower
* The Wise Men, by Walter Isaacson and Evan Thomas, about diplomats during the Truman administration
I tend to be more skeptical about books written by journalists that relate to some specific topic rather than historical narrative. Additionally, I tend to read the historical narratives with more skepticism than I do when I read history books written by professional historians.
There’s a reason the “criticisms” section of the wiki is one of the largest and links to a full article on it. It’s a religion, not an evidence-driven theory. Prescriptive, not descriptive.
I suppose we would agree to disagree if you believe in ideas not grounded in reality (a.k.a. backed by empirical evidence).
Anyway, I definitely second your point. I wonder if “Capital in the 21st Century” will be relevant for the entire century (although doubtlessly won’t be as consequential as Marx’s series).
How about another example: Bill Bryson. He's a brilliant writer and I really like all of his books. Except for A Short History of Nearly Everything. That's mostly a regurgitation of other popular science writers; it is a good read, but if you are fond of the genre, you'll spot his sources and it kind of ruins the effect.
Compare those with J.E. Gordon on structures and materials, John Clark's Ignition (which is a collection of amusing anecdotes and settling scores, I admit; the closest in my field is M.A. Padlipsky), Peter Ward (Gorgon), or Mark McMenamin (The Garden of Ediacara, some thumbs up).
If you want to know the story behind some event, journalists are admirably well suited to tell it. If you want to know about some field, they really aren't. In a world full of books, reading a journalist writing about phlebotomy or biochemistry is probably not all that useful.
I'm definitely in agreement about your Dijkstra amnesia effect.
Read the book! It is an excellent read, with technical details aplenty.
_When Genius Failed_ isn't simply a description of a scandal, but also a pop-grade exploration of the technical factors that led to the scandal. The book is impossible to write without conversance with its technical subject matter. Which is my point: Lowenstein wasn't a quant, but he was able to accurately and effectively write on them.
For example, when discussing LTCM volatility strategies:
>The stock market, for instance, typically varies by about 15 percent to 20 percent a year. Now and then, the market might be more volatile, but it will always revert to form—or so the mathematicians in Greenwich believed. It was guided by the unseen law of large numbers, which assured the world of a normal distribution of brown cows and spotted cows and quiet trading days and market crashes. For Long-Term’s professors, with their supreme faith in markets, this was written in stone. It flowed from their Mertonian view of markets as efficient machines that spit out new prices with all the random logic of heat molecules dispersing through a cloud.
From this quote it looks like Lowenstein believed that
1. Volatility is not mean reverting
2. The reason people at LTCM believed volatility was mean reverting was due to complex mathematical models rooted in market efficiency.
Now, volatility mean reversion is something that can be easily seen by looking at a graph of volatility over time.
Also, Lowenstein does not do a good job at describing the way LTCM modeled risk, and the way it failed. In fact, he barely even tries. He just hints here and there about "correlations going to one", but nothing more.
For example, which assumptions failed? Did they assume that different kinds of bets (relative value between bonds, merger arbitrage, arbitrage between dual-listed equities, etc.) were uncorrelated?
Did they get hurt more by different kinds of bets being correlated, or by some bets going particularly wrong? Did they stress test their risk measures in any way?
 E.g. https://a.c-dn.net/b/4tKv1V/Forex-Trading-Video-SPX-and-VIX-...
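To make the mean-reversion point concrete, here is a minimal sketch (entirely hypothetical numbers and a crude toy process of my own, not LTCM's actual model): simulate returns whose volatility wanders but gets pulled back toward a long-run level, then compute the rolling realized volatility you would plot to see the reversion by eye.

```python
import numpy as np

rng = np.random.default_rng(0)
long_run_vol = 0.17 / np.sqrt(252)  # ~17% annualized, the range Lowenstein cites

# Toy mean-reverting volatility process (an illustrative assumption).
vol = long_run_vol
returns = []
for _ in range(2000):
    vol += 0.05 * (long_run_vol - vol) + 0.1 * long_run_vol * rng.standard_normal()
    vol = abs(vol)  # keep volatility positive
    returns.append(vol * rng.standard_normal())
returns = np.array(returns)

# Rolling realized volatility over ~one quarter of trading days, annualized.
window = 63
rolling_ann_vol = np.array([
    returns[i - window:i].std() * np.sqrt(252)
    for i in range(window, len(returns))
])

# The series spikes and dips, but keeps returning to the neighborhood
# of the long-run ~17% level -- the pattern visible in the linked chart.
print(rolling_ann_vol.min(), rolling_ann_vol.mean(), rolling_ann_vol.max())
```

Plotting `rolling_ann_vol` for real index returns shows the same shape: excursions away from, and back toward, a long-run band.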
You are in no position to judge how accurately or effectively it was written (unless you judge effectiveness as entertaining you).
It doesn't help that everyone's attention is being hacked simultaneously. Nor does it help that the internet allows you to reach everyone on the planet, whether you are worthy of that reach or whether they are worthy of your words.
All I know is that in such an environment in flux, pointing out hypocrisy is a waste of time. Let it play out. Give it another 10 years.
So, your argument against a rule of thumb - a broad, sweeping heuristic never meant to account for all possibilities, and by definition not to account for outliers - is that outliers exist?
Well, I readily grant you: every rule of thumb, ever, has failed miserably at accounting for outliers.
Seeing as how the post you're addressing was written about how to triage the endless flux of books given finite time, I rather think their point was, you know, how best to handle the general case - not the outliers.
Rather like, "I will never read fiction whose cover is a sexy lady in all leather holding a knife and leaning on a motorcycle." It's not only likely, but certain, that that will end up excluding really good fiction. But, playing odds on where to devote my time, it's a better bet to avoid such books than not, given my reading preferences.
And I'm very comfortable with the rule that "non-experts attempting to digest and relate complex topics will tend to (a) not be experts, and thus (b) misunderstand, and (c) relate that misunderstanding to their readers, on average, and certainly are far more likely to than relevant experts." E.g., Malcolm Gladwell, who has an excellent reputation for his writing, just so long as you're not familiar with any of the things he writes about. Or 99% of pop-sci books ever written on the topic of quantum physics.
"Don't read books by journalists that aren't about journalism" is a bad rule of thumb.
Ever since YC had an RFS about replacing Wikipedia I’ve thought about how it needs to change. There are many shortcomings, including the fact that it is hard to edit for a number of reasons, but most importantly, a Wikipedia page at any point in time captures only a snapshot of information, which may be right at the time, wildly off, or, as you point out, full of omissions. Information shouldn’t be deleted; instead we should see meta information, like how other experts weigh in. If someone points to the Wait But Why article on AI, it will be backed by the author and Elon Musk (who are both dilettantes), and maybe we can see that some experts gave it a low rating.
Part of the problem I see is that often so called experts are just those who are rabidly vocal while the actual experts are heads down becoming better experts. So while we might see hand waving about killer AI from several “experts” we only have a few people who occasionally interject like Rodney Brooks. (This is a bit why I’m not very optimistic about several of these startups trying to show consumers “the truth;” indeed, the Dijkstra amnesia effect.)
Wow. Lord knows Wikipedia has problems, but replacing it with a VC-backed startup would be a disaster.
Then there are things like economics where even the experts in their field can’t agree on either theory or in data interpretation. If they can’t reach consensus being informed and all, imagine someone in a different field like journalism trying to write expertly.
As we can obviously see, even professionals in medical sciences were wrong about Theranos. How did John Carreyrou, a WSJ reporter, do such a good job reporting on them?
It was highly read and picked up by armchair experts everywhere leading to lots of outrage against... not much in particular other than evil Wall Street people stealing muh money.
The Big Short is in a similar category although it wasn’t really an ad so much as an excuse for everyone who overleveraged on their mortgage to pat themselves on the back and shirk the responsibility of financial ignorance onto “evil Wall Street people”. The movie (while entertaining) was even worse.
>As we can obviously see, even professionals in medical sciences were wrong about Theranos. How did John Carreyrou, a WSJ reporter, do such a good job reporting on them?
Simple: medical professionals aren’t equipped to sniff out business fraud signals the way investigators of businesses are. Given that Theranos didn’t have the technology, it didn’t take any more medical expertise to explain that than it took to explain Bernie Madoff.
W/re economics: we have the likes of Fukuyama and Krugman, who have influence on Americans with regard to economics but who are often taken as oracles, rather than accepted as economic thinkers who can be wrong.
My argument isn't "all books are good". That would be a stupid argument. My argument is that these rules suck.
Whether the particular books you refer to are false negatives on these heuristics is of course a matter of opinion. I don't know Woodward and Bernstein's book on constitutional law; their most famous book, All the President's Men, would surely fall under a similar exception to biography/memoir, since it is an account of their own investigation.
A book about investigative journalism by investigative journalists is rather different.
Edit: ... which obviously doesn't make the book any less interesting.
In other words there's no reason to believe the author would miss any of the books you mentioned by his "journalist disdain".
Probably an accurate statement really.
Neither of their books were really about constitutional law. "All the President's Men" is in the "Books about journalism" section of Wikipedia.
If you want a replacement example that doesn't make that point, substitute in Lawrence Wright and either Scientology or terrorism.
So yes, mistakes are expected initially. But the job of the author is to get enough outside eyes to fix it. Historians go to great lengths to ensure they aren’t just repeating a single view of an event. Most authors do not seem to make any similar effort.
Also, the idea that the errors software developers make don't have a broad impact on society is a bit of an eyebrow-raiser for me, but that might be a function of the subfield of software that I happen to work in.
It’s also an eyebrow raiser to me, given that’s not even close to what I said. “Having an impact” != “misrepresenting”.
I suspect I’ll be a long time waiting for you to make any correction to your comment that misrepresented what I said though, which is funny given the conversation.
I readily acknowledge that I am not an expert in the fields I write about.
That is why the bulk of my work is finding out who the experts are and presenting their work in an accessible way. (Granted, once you've been writing about a certain area for long enough, you do tend to become educated about it.)
One of the things that makes me feel good about what I do is that I am able to expose readers to intriguing information they might otherwise never encounter, unless they've got subscriptions to a bunch of academic journals in fields of study outside of their own.
Keep in mind, it's not a given that people who are the most well-respected experts in their field are also talented writers. People like Douglas Hofstadter certainly are both, but not every expert is like him. And if you follow Rule #1 religiously, it sounds as if you're limiting yourself to experts who are both. In the process, you're likely missing out on a lot of cool information, just because you are only willing to read first-hand accounts.
There's a lot of daylight between journalists and celebrity pundits.
-- Donald Knuth in "Things a Computer Scientist Rarely Talks About"
I also try to follow some other general rules that work for me:
* Read several books at once, esp. across disciplines.
* Read paper books.
* You don't need to finish books. Stopping mid-way is fine (still have problems with this!)
* Seek out durable works over bestsellers (https://en.wikipedia.org/wiki/Lindy_effect)
* Read across disciplines
* Write in books and make notes. Write up notes a couple of weeks after finishing (create your own commonplace book)
* Avoid audiobooks (if you want to retain the content). I just can't retain when I listen while driving/multitasking, but like listening to fiction for fun.
* Tag interesting books/papers cited in the books you like. Look them up and read them too.
* Find interesting/prolific readers on Goodreads. Look up the books they read, esp. the ones you've never heard of.
* Let other people know that you like reading, and ask what they've read recently. When they read interesting books, they'll recommend them to you.
So my current method is now to read the books that past-me thought sounded interesting.
I like this Umberto Eco anecdote: https://fs.blog/2013/06/the-antilibrary/. Having a lot of unread books around is a good reminder of how much there is to read, learn, and experience. And it makes reading instead of turning to Netflix an easier decision.
That's fine if you can afford it ;-) Living on a student's budget, I limit myself to two new books a month, or sometimes three. But then I make sure I pick good books, and really try my very best to finish them. Doesn't always work, but it does mean that I've completely read ~90% of the books on my shelves. (Although I will often read books in parallel, switching as the mood strikes.)
I started my "two books a month" habit about three years ago and have found it a valuable habit to have. I've read some excellent books, learnt a ton (from widely different fields) - and there is a certain joy of anticipation in carefully selecting "this month's books".
I think this is important with non-fiction. A lot of books can be wrapped up nicely in 80 pages, but the publisher wants 300. So they get a lot of unneeded padding at the end.
> interesting/prolific readers on Goodreads
Could you share a few?
No books with the author's name written in a larger font than the title.
This eliminates books whose main merit is the author's fame. It's especially good at filtering out crappy New York Times best sellers, and it works for fiction too.
> it works for fiction too.
Nonfiction authors usually write on a small handful of subjects, and chances are you're looking for a book on a particular subject and will compile a list of contenders, then do some light research on each author to gauge their authority on a subject.
But with fiction, you aren't likely to know the general contents of a book before you read it. You may be looking for a particular genre, but not a particular story. If writers like Stephen King or Isaac Asimov have proven that they are capable within their genres, it's actually beneficial sales-wise for those names to stand out in a book rack. If I see a bunch of books on a rack and one of them says Asimov, then I'm homing in on that book first.
This is especially true for business/self-improvement type books, where I've found it's almost never time-efficient when I'm really just looking for the list of 5 things I should be doing and skipping the extraneous pages of anecdotes.
Perhaps there is a dimension about how rigorous the thinking is behind the book. I struggle to imagine a YouTube video that could effectively and convincingly unpack ideas from The Intelligent Investor, The Sovereign Individual, Sapiens, etc. Other topics like "How to get rich with x" or pop-sci covered by the likes of Kurzgesagt are simplistic enough for a video essay, but those are seldom worth consuming regardless of medium.
* If it's a somewhat technical topic, go straight for the textbook/papers. If you don't understand them, you won't understand them any better from reading the "pop" material. If it's too complex (say, quantum physics), walk back and get acquainted with more basic material.
* If it's not technical, e.g. popular non-fiction books, listen to a (couple) podcasts. If it sounds like there's more to it than the 5 bullet points the author keeps repeating, get the book. That doesn't happen very often.
OK, I've been holding this in and now I finally have an excuse to put it out there:
Trying to talk about quantum physics without math makes it more confusing, not less. You inevitably end up making some weird analogies which aren't analogous, things which an expert might be able to reverse-engineer into the actual concepts but which put a non-expert in a Lewis Carroll Bullshitland, which is like Wonderland only not as amusing and definitely not worth putting in a book, let alone a "physics" book.
At worst, you end up with crap which is actively wrong, like everything Deepak Chopra has ever said in his entire existence.
Meanwhile, you can give people a real understanding of basic quantum mechanics with high-school algebra and a bit of simple logic.
The deep reason behind this is the same damned interpretation problem physicists have failed to solve for nigh-on a century now. We have the math, we know it works, and, miracle of miracles, we can do some real physics with fairly simple mathematical models, but we don't know exactly how the math hooks up with reality. If none of Dirac, Bohm, Feynman, and Pauling could definitively solve this problem, the odds of a pop science author doing so are not worth thinking about.
To drag this back to the topic: A book about quantum physics which includes no math isn't worth reading.
I had "Quantum Mechanics: The Theoretical Minimum" in mind, but I forgot it used some simple calculus, too. It's easy enough to bootstrap from high-school algebra to the kind of calculus it uses, but my statement wasn't correct for that book.
But I'm being unfair: The volume on quantum mechanics is the second volume, and both differentiation and integration are explicitly explained in the first, on classical mechanics.
And there's a difference between using an equation and deriving it. If you don't expect to derive equations, you can still understand quantum mechanics in terms of state vectors, matrix operators, and complex amplitudes turning into probabilities without explicitly using a Lagrangian, which does unavoidably require calculus.
(And, yes, I consider basic matrix algebra and complex numbers to be high school algebra.)
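That claim is easy to make concrete: with nothing beyond complex numbers and matrix multiplication, you can write down a qubit state, apply a gate, and turn amplitudes into measurement probabilities. A minimal sketch (a generic textbook calculation, not drawn from any particular book):

```python
import numpy as np

# A qubit state is a 2-component complex vector with unit norm.
# Example: equal superposition of |0> and |1>, with a relative phase.
state = np.array([1, 1j]) / np.sqrt(2)

# Born rule: measurement probabilities are squared magnitudes of amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]

# Operators are matrices. The Pauli-X gate swaps the basis amplitudes:
X = np.array([[0, 1], [1, 0]])
flipped = X @ state  # still a valid (unit-norm) state

# Expectation value of the observable X: <psi| X |psi>
# (np.vdot conjugates its first argument, which is exactly what <psi| needs)
expectation = np.vdot(state, X @ state).real
print(expectation)  # 0.0 for this state
```

No Lagrangian, no calculus: state vectors, matrix operators, and complex amplitudes turning into probabilities, exactly as described above.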
This goes for basically anything you purchase online. Read a couple highly positive reviews, read a couple highly negative reviews, then read some somewhat positive and somewhat negative reviews. Get the entire spectrum.
Emotions or lack of consideration cause some people to leave inaccurate ratings but still give useful information in the reviews themselves. Therefore you should never trust a 5-star or 1-star rating, but you should still consider why the reviewer felt compelled to leave such a rating.
At the same time, less extreme ratings might provide a fair and comprehensive assessment but the reviewer might have overlooked a particular edge case or issue.
I contend that only negative reviews have information; positive ones are propaganda you read with excitement to reinforce your emotional feeling of "I want this thing to enhance my life, I want to be part of the people experiencing this 5-star feeling, I'm dreaming of who I can be if I own this product, let me join in!", they don't tell you useful things.
If you read the negative reviews looking for dealbreakers, and think you can live with the defects described, then it might be a good enough buy. If you want to read the negative reviews with an emotional view, you can use it to reject the dream and stop wanting to buy the item or anything like it entirely, but that's not mandatory.
For applications, there are a lot of 1-star reviews along the lines of "the author did not translate this into my language, so 1/5" or "I used this in a completely wrong manner and I failed".
For physical goods I mostly agree. Especially on Amazon if I see an item with a very controversial distribution I assume that half of the shipped products are fakes which Amazon lets slide.
If it has a "surprise strength" which the company that made it didn't notice and didn't advertise, that's also something which probably brought you to it by referral (like a DVD player which is region unlocked: the reason you're looking at that one is because it was linked on a forum, and now you read the weaknesses).
You want a Bluetooth speaker, you find all the ones you can, then look at the negative reviews to see which have poor battery life and which have weak suckers for glass. You don't look in the positives to see if one is secretly really loud, because the negative reviews will tell you by complaining if it's too quiet, or too loud.
Some are better than others. Why? Because each product will have its strengths and weaknesses. This wood chipper has a great coat of paint which is impervious to scratches! But this wood chipper is much more fuel efficient! And this wood chipper is fully electric!
Those are all examples of a product's strength, not a weakness.
> If it has a "surprise strength" which the company who made it didn't notice, and didn't advertise ...
The whole point of reviews is because we can't trust the advertiser. Especially on Amazon where it's likely from a reseller who may be ignorant or straight up lie about the product.
> Want a bluetooth speaker, you find all the ones you can, then look for the negative reviews of which have poor battery life and which have weak suckers for glass.
Lol. I also care about how good they sound. I don't want to see an absence of reviews saying how bad it sounds. I want to see motivated reviews by enthusiastic users claiming how great the sound is, and then temper my expectations by checking the negative reviews to make sure someone more educated about speakers hasn't made a more in-depth analysis of the soundstage and quality of drivers, cables, etc. Using either source alone provides an incomplete picture.
You're arguing for arguing's sake. You clearly don't have a good method of making an educated purchase, so consider improving it before evangelizing it over well-established, comprehensive methods of making an educated purchase which are undeniably superior.
The idea that "negative reviews by themselves will always contain the full amount of information needed to make an informed purchase" is an axiom of online shopping is laughably preposterous.
On a related note, for binary (good/bad) reviews, we can temper the results by assuming a beta(1,1)-distributed prior and then updating. The expectation (which can be thought of as an "adjusted average rating") ends up being:
E[x] = (good + 1)/(good + bad + 2)
This adjusts for situations where the number of reviews is small, in which a good average rating can be misleading.
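A minimal sketch of that adjustment (the function name `adjusted_rating` is mine, not from the comment above):

```python
# Posterior mean of a Beta(1,1) prior after observing `good` and `bad`
# reviews -- equivalently, Laplace's rule of succession.
def adjusted_rating(good: int, bad: int) -> float:
    return (good + 1) / (good + bad + 2)

# With no reviews at all, the estimate is a neutral 0.5:
print(adjusted_rating(0, 0))    # 0.5

# A perfect score from 5 reviews no longer beats 95/100:
print(adjusted_rating(5, 0))    # 6/7  ~= 0.857
print(adjusted_rating(95, 5))   # 96/102 ~= 0.941
```

As the review count grows, the adjusted rating converges to the raw fraction of good reviews, so the correction only matters where it should: when the sample is small.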
"If Newton's Principia were published today, it would have 4 stars on Amazon. There would be one cluster of 5 star reviews by people saying it had revolutionized their thinking, and another cluster of 1 star reviews by people complaining it was pointless and hard to read."
and then someone linking to actual reviews on Amazon (avg. rating 4.1).
There are so many books available to me that I do use Goodreads ratings as one heuristic for choosing what to read. It would take a lot for me to read a 2 star book (like a very strong recommendation from someone I trust), not because I am sure it wouldn't be worth my time, but because there are plenty of more highly rated books that are more likely to be worth it.
The author even qualifies the rule: "Unfortunately, Goodreads ratings are often not unbiased..."
... under the assumption that everybody who rated the book shares your preconceived worldview. Assume that not to be the case on Amazon.com (or similarly large review platforms).
To give one example, the book talks about how in their 3d virtual chat game, IMVU, as a shortcut, they initially had the characters teleport around because they didn't have time to implement walking animations. They got some positive feedback from users about the teleporting "feature", and concluded teleporting was a great selling point and they shouldn't implement walking. The book then starts teaching what lessons you should take from this story.
However, around the same time they were making IMVU, Linden Labs made Second Life which did have walking animations and I believe was more successful than IMVU.
It was a common pattern in The Lean Startup to point to a single example that the author thinks worked out well for IMVU and extrapolate advice from it while ignoring any counter-examples.
But the other part was that it was just poor science, relying almost entirely on anecdotes for its claims. Which is not to say his claims are necessarily wrong, just that for any claim that isn't obviously true, there's no convincing data to back it up. There's usually just one success story where it seemed to work, and even within the story, it's hard to know if the strategy was actually successful. Like, as I mentioned before, the book passes off learning the users preferred teleporting to walking in IMVU as a success story, but it may have in reality been the inferior choice. We can't know for sure, but we do at least know other more successful virtual worlds have walking. IMVU never even tested walking. It undermines his credibility when he seems oblivious to his own possible failures. It's a lot of, "this is what we did, and I think it worked out well". Despite advocating split testing, he never split tested the techniques he advocated. There's no control - no baseline to compare to.
Or when he prefaces the chapter on small batches with a third-hand story of a father and two daughters who had to address, stuff, and seal a stack of envelopes. The daughters, aged 6 and 9, felt it would be faster to address them all first, then stuff them all, then seal them all. The father thought it would be faster to do them one at a time. So they each took half the envelopes and the father won.
I enjoy stories, but a 3rd hand anecdote about a father stuffing envelopes faster than two children is just... why? That's not going to convince anyone. He then calls back to this example several times when explaining how to apply this at a software startup (release frequent small updates, use continuous deployment). But even if one-at-a-time envelope stuffing is faster, and the startup advice is right, one does not imply the other. Just cite an actual study, or do an analysis of what was successful at other companies.
If the author didn't care enough to generate an index, what else did they not care enough about?
A surprising number of books, especially "pop sci" books, don't include an index.
With e-books though, why do you even need an index when you can just search?
Fiction, of course, rarely gets one.
I don't get how this is some big black/white issue. You don't have to finish every book you start, nor do you only have to read a single book at a time, and you can listen to audiobooks at high speeds if time is your main concern. Because in the time it takes you to "research" whether a book is worth your time or not, you could've already read a couple chapters of it and gotten enough of a broad overview of what it contains to decide for yourself.
I've managed to read about 60-90 books a year for the last 6 years, and that's only counting the ones I actually finish. Yet I'm sure the tally of books I've only started/skimmed would be significantly higher than that if I bothered to keep track of it.
You aren't going to preview / skim literally millions of books which exist to find the ones worth reading. You can't.
I've managed to read about 60-90 books a year for the last 6 years
That's, what, 8 hours' worth of book publishing in the USA, in 6 years?
> in the time it takes you to "research" whether a book is worth your time or not, you could've already read a couple chapters of it
Meaning that there would be no actual trade-off in time invested for you to take a slightly less superficial approach when evaluating books.
To compare the approach I proposed to the literally impossible task of evaluating every single book ever written (as if anybody would actually even be interested in reading all that) is quite disingenuous. This entire HN thread and article are already excluding most books in existence anyway, by the simple fact that everyone here has been mostly talking about "English non-fiction" books specifically. They may still tally up to a large number, but there's no point in exaggerating their quantity when we're all still just as hopeless in trying to read them all anyway. All you did by pointing out this impossibility is needlessly restate the obvious, which has nothing to do with what I said.
I'm already more than satisfied with the amount of books I read, and get a lot out of them without experiencing any existential anguish over it, so I don't understand why you're trying to throw my reading habits back at me as if they're "not good enough" or something all of a sudden. Or is that what reading is about these days? a mere measuring contest? Am I supposed to feel ashamed I didn't meet some internet stranger's arbitrary criteria for a habit that's supposed to benefit me? I proposed a viable alternative for evaluating books that works more efficiently for me than the one presented in the article, in case others are unsatisfied with their current reading habits. I didn't claim it was something that was going to win you the "reading olympics". If anything, I clearly supported the opposite by emphasizing that people should feel less obligated to finish the books they start, because feeling like you have to finish them is just going to cause needless anxiety about the way you've invested your time.
Besides, who cares if your entire lifetime of reading could've been published in a day? Is your goal in life to out-pace authors and publishers, or is it to get fulfillment out of books? Because in my experience, reading a single chapter out of a bad/mediocre book is a lot more fulfilling than reading a bunch of amazon/goodreads reviews.
If you've read enough, on the other hand, then sure, it might work. Might, in the sense that with some luck and taste, you'll find the good books. But it's still reliant on taste, which is very, very biased. Anything that looks uninteresting but is actually awesome will be missed.
The bibliography is a quick but effective heuristic to gauge how serious the book is, and how well the author knows his field. If there isn't one, don't bother buying it. If it's a 20-page list of citations from reputable journals (or original sources, if you're reading a history book), then the author doesn't have to be a professor to be believable.
1. Aim for topics I was already interested in before I knew the book existed. There is something to be said for books so good they draw attention to the topic, but that also increases the odds of the book's rating being based on popularity and not quality. I'd rather read a good book about niche topic that is particularly meaningful to me than a better book that just happens to hit the zeitgeist. In other words, I pick a topic and hunt for books.
My most recent non-fiction books were Derek Wu's book on Spelunky, Chapman Piloting & Seamanship, and Thinking with Type. None of those are going to make Oprah's Book Club, but all were very enriching for me because they aligned with areas that matter to me.
2. Aim for books that are "canonical" according to people in the field. Signals for this are lots of reviews, especially many reviews over a period of years, because that shows consistent relevance. When people I respect and share interests with mention a book, that's a strong signal. When reviews of other similar books mention it as a point of reference, that's a strong signal.
3. Read a few pages and judge the quality of writing. I have read very very few books where the underlying concepts were valuable enough to be worth wading through bad writing. On the contrary, my experience is that deep, clear thinkers produce quality at all levels of exposition. They wrap their good ideas in good chapters with good paragraphs full of good sentences. Life is too short for shitty prose.
I also have a simple rule for how to increase the quantity of non-fiction I read: Put it in the bathroom and don't bring my phone in there. You'd be surprised how much text you can get through one poop at a time. This is great for books where reading it is not super engaging but I want to have read it.
I think it's a combination of "contemporary books will be from the swirl of life around you that you already know, old books will be from a different culture and time and that difference is important" and "knowledge lost, from people who are no longer living".
It would also have survived the test of time: a Darwinian natural selection of information. That's something we're missing when we want the internet to keep data forever. We should be letting data rot and be lost; taking ongoing action to keep something in the present is a vote for its importance, and setting up a system which preserves information without effort cheats that selection and leaves us swimming in e-waste.
If I'm looking for a book I want to learn from, I scout recommendations from journals, magazines, blogs, respected radio programs/podcasts. End of year "best of" lists are also a good source of ideas. Then I read some in-depth reviews of books that strike my fancy by reviewers who have reason to know what they're talking about. Many of these have 4+ star ratings in goodreads, some don't.
Also, a blanket rejection of books by journalists or other non-experts is going to lead you to miss some really good books. That rule immediately called to mind Tracy Kidder. So, "The Soul of a New Machine" is off limits. Really? No thanks. I can think of many others.
Also agree that Goodreads reviews require a healthy dose of caveat lector.
It takes some getting used to and requires more focus but it's very efficient in terms of information dump. I've been able to increase the speed over time. My wife thinks I'm crazy. I use headphones.
The airplane is easiest for 2x speed because I can close my eyes, which helps me focus on the sound even more.
I use Audible though.
Moreover, 3000 books is still only scratching the surface of all non-fiction ever published in English (let alone other languages), so strong heuristics will still be needed.
1) Find authors and intellectual subcultures that you are interested in, and follow what they are doing.
2) Read intelligent long-form reviews of books. Most good intellectual magazines and journals have a review section.
3) Read whatever you can on Amazon preview and Google Books before buying a book.
Obviously, YMMV. This can lead to issues (for example, ideological hegemony in a lot of the fields I'm researching in) and it's by no means the only method I use to pick works. But it has been really helpful for me!
I read comments only after I'm done with the book.
Barbarians at the Gate is a good example of this - it required authors who were able to dig deep into _what_ happened. There's obviously some info on why things happened there as well, but the primary purpose of the book is to inform the reader of what occurred, which is a good use of the journalistic skill set.
The Empathy Exams by Leslie Jamison only has a 3.63 and I don't think she's really an "expert" in any of the specific topics she writes about. It's brilliant ...but maybe I can still return it?
This system is a great way to hyper-optimize for narrow-mindedness.
Some books I've read don't really follow the spirit of "Very Short Introduction" and are rather dense. The quality of writing also varies quite a bit.
Overall though, they are a good first stop if you want to get basic familiarity with a topic.
I use a heuristic that the best book about a given topic was most likely written half as long ago as people have been thinking about that topic. People write more books now, but they get less interested in old topics as time goes by, and those effects tend to cancel out.
I also use the 6.5 IMDB rule for movies and the 8.0 IMDB rule for TV shows.
Charles Mann's 1491 and 1493 remain the best books on the topic.
Robert Whitaker's Mad in America and Anatomy of an Epidemic likewise.
You don't have time for everything, so it's good to have rules. But always remember that on occasion you need to break your own rules otherwise your knowledge becomes biased.
My extra rule to overcome this bias: if anyone recommends me their favorite book, I will read it, no matter how stupid I believe the recommendation is, no matter how many rules it violates.
I found The Da Vinci Code this way. That book was a huge mistake, but still, I now have actually read the book and have the definitive knowledge to know it's a mistake.
Pretty much what this article does :)
> The best nonfiction books I have read have invariably been by folks who spent their lives researching that particular issue. A couple of books in this category immediately come to mind: Why We Sleep, The Language Instinct, Gödel Escher Bach.
Hofstadter was only 34 when GEB was published.
"Roaming in his 1956 Mercury, Hofstadter thought he had found the answer—that it lived, of all places, in the kernel of a mathematical proof. In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of "level-crossing feedback loop." He sat down one afternoon to sketch his thinking in a letter to a friend. But after 30 handwritten pages, he decided not to send it; instead he'd let the ideas germinate a while. Seven years later, they had not so much germinated as metastasized into a 2.9-pound, 777-page book called Gödel, Escher, Bach: An Eternal Golden Braid, which would earn for Hofstadter—only 35 years old, and a first-time author—the 1980 Pulitzer Prize for general nonfiction."
You seem to be trying to argue that -- because he continued to live afterward, he somehow didn't spend his entire life on the book. But that's not what was being asserted -- only that he had spent "his life" up to the time of the work's publication on the work. And it's not even relevant if it were meant as you seem to be assuming, because Hofstadter has continued to study the very same subject ever since. He's also published more books on the very same matter in the following years. It was his life's work then, and it continues to be today.
First, a nitpick - the book was published when he was 34, and he was even younger when he wrote it.
But the main point is that the post is clearly talking about people who have spent the entirety of a lifetime studying an area, not someone who has spent their adulthood so far studying the subject.
And Hofstadter clearly hadn't devoted the entirety of his adulthood up to age 34 to studying the subjects of GEB. The quoted passage says that his formal study had been in particle physics.
Moreover, Hofstadter's study of physics isn't unrelated to his study of intelligence. I personally got a degree in physics to study language and intelligence, because the way physicists use language, analogy, and simple concepts to understand the world is particularly effective and interesting. So I can tell you first-hand that they are related. In fact, anyone who studies philosophy is probably making a serious mistake to not study physics first.
The post is very clear that it's talking about expertise in the sense referred to in my comments (which, as also indicated in my comments, I don't fully agree with):
"Rule #1: Prefer books by experts in the field
The best nonfiction books I have read have invariably been by folks who spent their lives researching that particular issue. A couple of books in this category immediately come to mind: Why We Sleep, The Language Instinct, Gödel Escher Bach.
Positive indicators of this in a blurb may include “Professor in [field directly related to the book’s topic]”, “Long-time researcher in [field directly related to the book’s topic]”.
Note how they say "Professor in" and "Long-time researcher in".
The way you're using the term, a 21 year old can have "spent their life researching the topic" if they've been focused on it over the previous three years.
Albert Einstein was 21 when he published his first paper, and 26 when he published his best. Please tell me that he didn't spend his life researching physics, even then.