John Carreyrou wasn't an expert on phlebotomy or biochemistry. Roger Lowenstein wasn't an expert on hedge funds or technical finance. Eric Schlosser on nuclear command and control? Nope. Woodward and Bernstein and constitutional law? Not so much!
I don't think these are very good rules.
I thought it was especially amusing that this person doesn't trust journalists to write about anything but journalism. If you believe that, why are you interested in reading about journalism in the first place?
HN has an unfortunate fixation on Michael Crichton's supposed Gell-Mann amnesia effect (wherein you read something in the news that pertains to your field, spot errors, get angry, and then forget that happened when you go on to read the next story). I propose the countervailing Dijkstra amnesia effect, wherein a technical professional produces workmanlike output with all the attendant errors and omissions that attach to any work produced by humans (Christ knows our field is intimately acquainted with errors and omissions), then forgets they did that, and expects every other professional to measure up to the standard they themselves failed to meet.
I don't know any of the authors you've mentioned, but in my own experience, not reading books written by journalists is a good rule of thumb. That doesn't mean, obviously, that it's impossible for a journalist to write a good book. By profession, journalism is both shallow and narrow: it focuses on the surface of events in the given moment. That tends not to equip someone with the knowledge or expertise to write a truly insightful book.
A counterpoint to your point on shallowness and narrowness is Andrew Solomon's "Far from the Tree: Parents, Children and the Search for Identity". Journalists, ostensibly, are good at writing, interviewing, and exploring wide-ranging societal struggles. The journalist archetype has unfortunately fallen into disrepute if journalists are no longer trusted as relayers of information. On reflection, it may be worth distinguishing the 'columnist' archetype from the 'reporter' and the 'investigator'. The latter two I value greatly, though I also value reflection and summation. No mainstream audience wants to read a white paper on recent Syrian conflicts. For this we need good relayers of information.
> That tends not to equip someone with the knowledge or expertise to write a truly insightful book.
Or perhaps their fallibility in the field and readiness to acknowledge their shortcomings in it promotes lots of cooperation with and review by experts. You note narrowness, but what's more narrow than expecting a single expert in one or more fields to have a comprehensive view about anything, even their own subject much of the time? I can't think of a single person in computer science that I would trust to know everything about that topic, so why would I trust that person to cover it well over someone that has spent a career coordinating information from different people and attempting to distill it to an audience that may not be familiar with it?
And to be clear, I don't see computer science as special in this regard. I think most things interesting enough to cover are probably complex enough that multiple people are likely required to get a full picture of it.
It seems like a rule of thumb that’s served you poorly, given it’s left you unfamiliar with some of the most widely acclaimed recent (and less recent) non-fiction work.
I presume you're referring to the authors the OP listed. It's certainly true that I don't tend to read a huge amount of popular nonfiction, and that I'm young enough to have a generationally limited knowledge of what's out there. But, at the same time, what the best nonfiction books of recent decades have been is an open and very subjective question. I have my own peculiar interests and proclivities. I think, from a quick Google, that all the books the OP listed fall outside of that space.
It is fine to not be interested in things; there's lots I'm not interested in either. It's less great to have strong, sweeping opinions about things you're basically incurious about.
See where I wrote 'in my own experience'. The reasoning I gave, in any case, holds entirely independently of how encompassing my reading is. And to be quite honest, with the caveats above, I would hazard that I read far more nonfiction than most, i.e. I am not speaking from a position of unusual ignorance, as you seem to be implying.
I have no idea who you are or what expertise you have; I'm responding to your own statement that you're unaware of these journalist-authors because you don't read much nonfiction.
You said that if you are incurious about something, you ought not to make sweeping statements about it. I was challenging the premise, i.e. that I'm incurious about non-fiction books. I don't, by the way, think the claim holds up. It's perfectly possible to be incurious about something and still make sensible general statements about it (e.g. 'all ducks have wings'). Hence why I told you to engage with my reasoning.
I never said I don't read much non-fiction. I spend nearly all my time reading non-fiction. I said I don't spend 'huge' amounts of time reading popular non-fiction.
You can't really logic-puzzle your way out of 'if you are unfamiliar with books written by journalists, you're not in a good position to make worthwhile pronouncements on books written by journalists'. It's not even an interesting thing to try and dispute.
_When Genius Failed_ was written 18 years ago, by a journalist, and predicted many technical details of the 2008 crisis. It's also an excellent book, so if this is news to you, let me also recommend it unreservedly as a good read.
Agreed. That said, to give some credit, here are a few I’ve read during the last few years that I can offer as counter examples:
* All the Shah’s Men, by Stephen Kinzer, about how the 1953 coup against the democratically elected prime minister of Iran, Mohammad Mosaddegh, was orchestrated.
* The Idea Factory, by Jon Gertner, about the history of Bell Labs
* A Mind at Play, by Jimmy Soni and Rob Goodman, about Claude Shannon
* Ike’s Bluff, by Evan Thomas, about Dwight Eisenhower
* The Wise Men, by Walter Isaacson and Evan Thomas, about diplomats during the Truman administration
I tend to be more skeptical about books written by journalists that relate to some specific topic rather than historical narrative. Additionally, I tend to read the historical narratives with more skepticism than I do when I read history books written by professional historians.
I’m honestly not sure what there is to disagree with. There has been no quantitative evidence supporting it. “Things should cost based on labor” is a nice thought for laborers, but it’s completely unrealistic. Nobody is going to pay twice as much for something because one manufacturer used a really shitty process that took twice as many man hours.
There’s a reason the “criticisms” section of the wiki is one of the largest and links to a full article on it. It’s a religion, not an evidence-driven theory. Prescriptive, not descriptive.
I suppose we would agree to disagree if you believe in ideas not grounded in reality (a.k.a. backed by empirical evidence).
Is it a total coincidence that you bring this up responding to a user with the name Emma_Goldman?
Anyway, I definitely second your point. I wonder if “Capital in the 21st Century” will be relevant for the entire century (although doubtlessly won’t be as consequential as Marx’s series).
Hah, no, seeing the username was a factor in pushing me to make the comment. I feel as though many books tend to be far less wide-ranging, because they take up a smaller view; Piketty's book (going from what I'm seeing in the descriptions, I haven't read it yet) seems to be limited in scope, and less polemical. But those aren't matters of its relevance. So I might venture to say that if we are to see a book as consequential as Marx's, its author will see with revolutionary (forgive the pun) clarity right to the most abstract core of their object. In the modern era, there are only three widely regarded as having done so with force: Marx, Nietzsche and Freud. I'm still waiting to see what the post-modern era brings out. Who will do to economics today what Marx did to political economy and sociology in the 19th century?
That's a good point, but it's worth noting that Marx wrote Capital with the help of Engels's patronage. Journalism was not his livelihood, ultimately. And it was before the wide-ranging academic professionalisation of the twentieth century. Had Karl Marx been born today, he would almost certainly have become an academic.
Journalism was literally his only livelihood. He was a journalist/editor years before he even conceived of the theories that spawned Capital, and he did it for the rest of his life.
His first newspaper was actually shut down precisely because of his journalism on the question of the rights of the poor to gather wood in forests owned by the crown and nobles. He went on to write for at least 3 other papers/journals that I know of, one of which was American, and his work there was directly on the American Civil War.
If you read Gareth Stedman Jones' biography of Marx, it's very clear that there was a persisting shortfall between whatever income Marx acquired via journalism - which was, in any case, fluctuating - and the outgoings of his family. He depended upon regular support from Engels, among others.
Oh, for sure. I've read a couple of collections of his correspondence, specifically with Engels, and it's replete with pleas for money. That being said, his only real job, his only job, from university until his death was journalism, and he did quite a bit of it and absolutely relied on it for income along with Engels's (and others') donations. Like, journalism makes up close to 3(?) volumes of his collected works.
John Carreyrou (ok, I haven't read his recent book, only heard a lot about it) didn't write about phlebotomy or biochemistry; he writes about corporate scandals, something he has been involved with since he first shared the Pulitzer Prize in 2003. Likewise, Woodward and Bernstein are the experts on Watergate. Roger Lowenstein also writes about corporate scandals in When Genius Failed---that may be all you need to know about hedge funds, but it is kind of one-sided.
How about another example: Bill Bryson. He's a brilliant writer and I really like all of his books. Except for A Short History of Nearly Everything. That's mostly a regurgitation of other popular science writers; it is a good read, but if you are fond of the genre, you'll spot his sources and it kind of ruins the effect.
Compare those with J.E. Gordon on structures and materials, John Clark's Ignition (which is a collection of amusing anecdotes and settling scores, I admit; the closest in my field is M.A. Padlipsky), Peter Ward (Gorgon), or Mark McMenamin (The Garden of Ediacara, some thumbs up).
If you want to know the story behind some event, journalists are admirably well suited to tell it. If you want to know about some field, they really aren't. In a world full of books, reading a journalist writing about phlebotomy or biochemistry is probably not all that useful.
I'm definitely in agreement about your Dijkstra amnesia effect.
I think you might be selling Carreyrou short. The "corporate scandals" you're saying he wrote about are, in many cases, scandals because he broke the story about them. There might still be a Theranos as a going concern if it weren't for him. He's not like Andrew Ross Sorkin writing about the collapse of investment banks in 2008 --- his reporting had a causal impact on the collapse he wrote about.
Read the book! It is an excellent read. There are technical details in it aplenty.
_When Genius Failed_ isn't simply a description of a scandal, but also a pop-grade exploration of the technical factors that led to the scandal. The book would have been impossible to write without conversance with its technical subject matter. Which is my point: Lowenstein wasn't a quant, but he was able to write about them accurately and effectively.
I cannot speak on Carreyrou, but parts of _When Genius Failed_ make me think that Lowenstein did not have a good grasp of the subject he was writing about.
For example, when discussing LTCM volatility strategies:
>The stock market, for instance, typically varies by about 15 percent to 20 percent a year. Now and then, the market might be more volatile, but it will always revert to form—or so the mathematicians in Greenwich believed. It was guided by the unseen law of large numbers, which assured the world of a normal distribution of brown cows and spotted cows and quiet trading days and market crashes. For Long-Term’s professors, with their supreme faith in markets, this was written in stone. It flowed from their Mertonian view of markets as efficient machines that spit out new prices with all the random logic of heat molecules dispersing through a cloud.
From this quote it looks like Lowenstein believed that
1. Volatility is not mean reverting
2. The reason people at LTCM believed volatility was mean reverting was due to complex mathematical models rooted in market efficiency.
Now, volatility mean reversion is something that can be easily seen by looking at a graph of volatility over time [1].
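To make that concrete, here is a minimal Python sketch of the kind of graph I mean, assuming you have a CSV of daily S&P 500 closes on hand (the file and column names are hypothetical); the spikes around 1998, 2008, etc. that decay back toward the 15-20% range are the mean reversion in question.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical input: spx.csv with "date" and "close" columns of daily S&P 500 closes
    prices = pd.read_csv("spx.csv", parse_dates=["date"], index_col="date")["close"]

    # Log returns, then 21-day rolling realized volatility, annualized over ~252 trading days
    returns = np.log(prices).diff().dropna()
    rolling_vol = returns.rolling(21).std() * np.sqrt(252)

    rolling_vol.plot(title="Rolling 1-month realized volatility, annualized")
    plt.show()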
Also, Lowenstein does not do a good job of describing the way LTCM modeled risk, and the way it failed. In fact, he barely even tries. He just hints here and there about "correlations going to one", but nothing more.
For example, which assumptions failed? Did they assume that different kinds of bets (relative value between bonds, merger arbitrage, arbitrage between double-listed equities, etc.) were uncorrelated?
Did they get hurt more by different kinds of bets being correlated, or by some bets going particularly wrong? Did they stress-test their risk measures in any way?
Ask quants if they feel that it’s a balanced view of quants though. It might seem accurate to you, but you’re not a quant now, nor were you one during the era of LTCM.
You are in no position to judge how accurately or effectively it was written (unless you judge effectiveness as entertaining you).
We are in a period of transition. Moving from info scarcity to info overload didn't come with a migration booklet.
It doesn't help that everyone's attention is being hacked simultaneously. Nor does it help that the internet allows you to reach everyone on the planet, whether you are worthy of that reach or whether they are worthy of your words.
All I know is that in such an environment in flux, pointing out hypocrisy is a waste of time. Let it play out. Give it another 10 years.
> John Carreyrou wasn't an expert on phlebotomy or biochemistry. Roger Lowenstein wasn't an expert on hedge funds or technical finance. Eric Schlosser on nuclear command and control? Nope. Woodward and Bernstein and constitutional law? Not so much!
So, your argument against a rule of thumb - a broad, sweeping heuristic never meant to account for all possibilities, and by definition not to account for outliers - is that outliers exist?
Well, I readily grant you: every rule of thumb, ever, has failed miserably at accounting for outliers.
Seeing as how the post you're addressing was written about how to triage the endless flux of books given finite time, I rather think their point was, you know, how best to handle the general case - not the outliers.
Rather like, "I will never read fiction whose cover is a sexy lady in all leather holding a knife and leaning on a motorcycle." It's not only likely, but certain, that that will end up excluding really good fiction. But, playing odds on where to devote my time, it's a better bet to avoid such books than not, given my reading preferences.
And I'm very comfortable with the rule that "non-experts attempting to digest and relate complex topics will tend to (a) not be experts, and thus (b) misunderstand, and (c) relate that misunderstanding to their readers, on average, and certainly are far more likely to than relevant experts." E.g., Malcolm Gladwell, who has an excellent reputation for his writing, just so long as you're not familiar with any of the things he writes about. Or 99% of pop-sci books ever written on the topic of quantum physics.
There are good rules of thumb (don't check malloc return values for NULL, just blow up when malloc fails) and bad rules of thumb (no function should be longer than 60 lines).
"Don't read books by journalists that aren't about journalism" is a bad rule of thumb.
I like the idea of the Dijkstra amnesia effect. I feel like it's high time to birth a system like what Ted Nelson had in mind, using RDF or something more novel.
Ever since YC had an RFS about replacing Wikipedia I’ve thought about how it needs to change. There are many shortcomings, including the fact that it is hard to edit for a number of reasons, but most importantly, a Wikipedia page at any one point in time only captures the information of that moment, which may be right at the time, or wildly off, or, as you point out, full of omissions. Information shouldn’t be deleted; instead we should see meta information like how other experts weigh in. If someone points to the Wait But Why article on AI it will be backed by the author and Elon Musk (who are both dilettantes), and maybe we can see that some experts gave it a low rating.
Part of the problem I see is that often so called experts are just those who are rabidly vocal while the actual experts are heads down becoming better experts. So while we might see hand waving about killer AI from several “experts” we only have a few people who occasionally interject like Rodney Brooks. (This is a bit why I’m not very optimistic about several of these startups trying to show consumers “the truth;” indeed, the Dijkstra amnesia effect.)
So, imagine if even professionals/experts in their fields are prone to mistakes, then what can we expect from people who know little about a foreign subject?
Then there are things like economics where even the experts in their field can’t agree on either theory or in data interpretation. If they can’t reach consensus being informed and all, imagine someone in a different field like journalism trying to write expertly.
What book reporting on economics that wasn't written or co-written by an economist are you concerned is too influential or widely read?
As we can obviously see, even professionals in medical sciences were wrong about Theranos. How did John Carreyrou, a WSJ reporter, do such a good job reporting on them?
Flash Boys was pretty much a one-sided, incorrect view of how HFTs work, designed as an advertisement for a new exchange.
It was highly read and picked up by armchair experts everywhere leading to lots of outrage against... not much in particular other than evil Wall Street people stealing muh money.
The Big Short is in a similar category although it wasn’t really an ad so much as an excuse for everyone who overleveraged on their mortgage to pat themselves on the back and shirk the responsibility of financial ignorance onto “evil Wall Street people”. The movie (while entertaining) was even worse.
>As we can obviously see, even professionals in medical sciences were wrong about Theranos. How did John Carreyrou, a WSJ reporter, do such a good job reporting on them?
Simple: medical professionals aren’t equipped to sniff out business fraud signals the way investigators of businesses are. Given that Theranos didn’t have the technology, it didn’t take any more medical expertise to explain that than it took to explain Bernie Madoff.
To do journalism well, I think you need insider help: a whistleblower without ulterior motive, corroborating evidence, and subject matter experts who can confirm suspicions. So I think there are ways for journalists to marshal resources and then do their part, which is to write a coherent story based on credible evidence and opinion. But much of what comes from journalism is hurried, often put together after a conclusion has already been formed, finding data that supports the conclusion and omitting data that would weaken it.
I’m implying this journalist had the luck of the right resources at the right time for the right subject, and proceeded in a manner to seek the truth rather than simply wanting to bring Theranos down.
With regard to economics: we have the likes of Fukuyama and Krugman, who have influence on Americans with regard to economics but who are often taken as oracles, rather than accepted as economic thinkers who can be wrong.
Krugman won a Nobel prize in economics. Like him or not, you can't use him as an example of a "journalist" writing about a technical subject with which they have no expertise.
My argument isn't "all books are good". That would be a stupid argument. My argument is that these rules suck.
These are heuristics. They will of course produce false negatives. Given the huge volume of available books relative to available reading time in a human life, the most important thing is to minimize false positives.
Whether the particular books you refer to are false negatives on these heuristics is of course a matter of opinion. I don't know Woodward and Bernstein's book on constitutional law; their most famous book, All the President's Men, would surely fall under a similar exception to biography/memoir, since it is an account of their own investigation.
This is the same funny circular argument the author of these "rules" made. If you don't believe in the value of the reporting Woodward and Bernstein did, why would you believe there's value in their account of how they did it?
I think you're kind of cherry-picking examples here. My gloss of the OP's point was "be skeptical of books by journalists on technically complex subjects in which they have no direct expertise". e.g. it's common for journalists who have never worked in a scientific field to write popular science books.
A book about investigative journalism by investigative journalists is rather different.
How is blood lab technology not a technically complex subject? Are you saying Carreyrou had direct expertise with it, or that _Bad Blood_ is an untrustworthy account?
It does indeed. The question you are asking leads me to believe you haven't read it. As a favor to the thread I will make this plain: drop everything else you're reading long-form and buy a copy of _Bad Blood_; it is fantastic. I didn't give a shit about Theranos. Until I got a couple pages into Carreyrou's book, that is.
This method does seem limited. I have read some of the best nonfiction books from journalists over the last few years: Vance's biography of Musk, Stone's biography of Bezos, and the books from Isaacson, including the Jobs biography. The author is missing all these nonfiction books due to his disdain for journalists.
This thread, I think, was Musk complaining about a report that he let go an assistant who asked for a raise. He did that, he says so in the thread. He just doesn't like the way it is sometimes portrayed.
Did you read the post? He specifically calls out biographies as the exception to the 'no journalists' rule and describes why journalists are well suited to write bios.
In other words there's no reason to believe the author would miss any of the books you mentioned by his "journalist disdain".
Fair enough! Thanks for keeping me honest. Unfortunately the edit window has passed, so I can't fix it as suggested in the guidelines: future readers, please ignore the question and start at sentence 2.
Yeah, I agree; I wrote that comment very quickly and needed something better than "whatever you want to call the expertise required to understand the issues involved in Watergate and in reporting Watergate". I brought it up because I think it's pretty funny to have a "no journalists, except when they're writing about journalism" rule. Nobody was interested in _All The President's Men_ because they wanted a Jack-Shafer-esque inquiry into the media and the practice of reporting; what made the book influential is the reporting Woodward and Bernstein performed.
If you want a replacement example that doesn't have that problem, substitute in Lawrence Wright and either Scientology or terrorism.
Making an error in my field doesn’t fundamentally misrepresent another field to the general public though.
So yes, mistakes are expected initially. But the job of the author is to get enough outside eyes to fix it. Historians go to great lengths to ensure they aren’t just repeating a single view of an event. Most authors do not seem to make any similar effort.
There's a reason journalism is called "the first draft of history", and not the final draft.
Also, the idea that the errors software developers make don't have a broad impact on society is a bit of an eyebrow-raiser for me, but that might be a function of the subfield of software that I happen to work in.
>Also, the idea that the errors software developers make don't have a broad impact on society is a bit of an eyebrow-raiser for me
It’s also an eyebrow raiser to me, given that’s not even close to what I said. “Having an impact != misrepresenting”.
I suspect I’ll be a long time waiting for you to make any correction to your comment that misrepresented what I said though, which is funny given the conversation.
I am an author of several nonfiction (pop-sci) books. I am also a former journalist.
I readily acknowledge that I am not an expert in the fields I write about.
That is why the bulk of my work is finding out who the experts are and presenting their work in an accessible way. (Granted, once you've been writing about a certain area for long enough, you do tend to become educated about it.)
One of the things that makes me feel good about what I do is that I am able to expose readers to intriguing information they might otherwise never encounter, unless they've got subscriptions to a bunch of academic journals in fields of study outside of their own.
Keep in mind, it's not a given that people who are the most well-respected experts in their field are also talented writers. People like Douglas Hofstadter certainly are both, but not every expert is like him. And if you follow Rule #1 religiously, it sounds as if you're limiting yourself to experts who are both. In the process, you're likely missing out on a lot of cool information, just because you are only willing to read first-hand accounts.
"When I'm in a book store nowadays, or when I'm at a conference looking at a table full of new books, I try to gauge how good the books are by picking them up and -- yes, I guess it's okay to reveal the secret -- I turn to page 316. I try to read that page rather carefully, and this gives me an impression of the whole book. (Often the book is too short; then I use page 100. Authors, take note if you expect me to buy anything you've written.) This system works quite well, I think."
-- Donald Knuth in "Things a Computer Scientist Rarely Talks About"
Here's what I do. If I get a book recommendation, I immediately buy it and put it on a bookshelf. Every time I walk by, I scan the shelf and pick what speaks to me at that given time: fiction, non-fiction, cookbooks, science, history, classics, short-story collections, biographies, etc. If you match what you read to your mood and frame of mind, you can consume and retain information much more quickly and enjoyably. And having the book on hand is really important, since my interests and mood change from day to day.
I also try to follow some other general rules that work for me:
* Read several books at once, esp. across disciplines.
* Read paper books.
* You don't need to finish books. Stopping mid-way is fine (still have problems with this!)
* Write in books and make notes. Write up notes a couple of weeks after finishing (create your own commonplace book)
* Avoid audiobooks (if you want to retain the content). I just can't retain when I listen while driving/multitasking, but like listening to fiction for fun.
* Tag interesting books/papers cited in the books you like. Look them up and read them too.
* Find interesting/prolific readers on Goodreads. Look up the books they read, esp. the ones you've never heard of.
* Let other people know that you like reading, and ask what they've read recently. When they read interesting books, they'll recommend them to you.
One problem with buying books when they're recommended is that it's a lot easier to buy books than read them. I recently went through my shelves and counted the books I owned but have not read and was shocked.
So my current method is now to read the books that past-me thought sounded interesting.
Yeah that's true. And having a small apartment makes that problematic as well.
I like this Umberto Eco anecdote: https://fs.blog/2013/06/the-antilibrary/. Having a lot of unread books around is a good reminder of how much there is to read, learn, and experience. And it makes reading instead of turning to Netflix an easier decision.
> If I get a book recommendation, I immediately buy it and put it on a bookshelf.
That's fine if you can afford it ;-) Living on a student's budget, I limit myself to two new books a month, or sometimes three. But then I make sure I pick good books, and really try my very best to finish them. Doesn't always work, but it does mean that I've completely read ~90% of the books on my shelves. (Although I will often read books in parallel, switching as the mood strikes.)
I started my "two books a month" habit about three years ago and have found it a valuable habit to have. I've read some excellent books, learnt a ton (from widely different fields) - and there is a certain joy of anticipation in carefully selecting "this month's books".
> You don't need to finish books. Stopping mid-way is fine (still have problems with this!)
I think this is important with non-fiction. A lot of books can be wrapped up nicely in 80 pages, but the publisher wants 300. So they get a lot of unneeded padding at the end.
Somewhat similar to what I follow, although these days I don't have a particular preference for paper books vs ebooks. And 'durable vs bestseller' is probably at the top of the list.
> interesting/prolific readers on Goodreads
Could you share a few?
Like the article's #3, I have my own silly rule, but it's a good one.
No books with the author's name written in a larger font than the title.
This eliminates books whose main merit is the author's fame. It's especially good at filtering out crappy New York Times best sellers, and it works for fiction too.
100% agree when dealing with non-fiction. It shows a lack of respect for the book as an art form or information resource. It shows the author is writing for exposure and money and not to produce something of true value.
> it works for fiction too.
Nonfiction authors usually write on a small handful of subjects, and chances are you're looking for a book on a particular subject and will compile a list of contenders, then do some light research on each author to gauge their authority on a subject.
But with fiction, you aren't likely to know the general contents of a book before you read it. You may be looking for a particular genre, but not a particular story. If someone like Stephen King or Isaac Asimov has proven that they are capable writers within their genre, it's actually beneficial sales-wise for these names to stand out in a book rack. If I see a bunch of books on a rack and one of them says Asimov, then I'm homing in on that book first.
It doesn't work for fiction for the simple reason a reprint is as good as the original, and a publisher reprinting Known Author's Work From Before Their Fame is going to do everything they can to make sure you know this unknown book is by Known Author. It isn't the book's fault, and it isn't the author's fault, and it in no way diminishes either.
In recent years I've switched a lot to watching a non-fiction author's 30 minute talk on youtube about their book rather than reading the book entirely.
This is especially true for business/self-improvement type books, where I've found it's almost never really time-efficient when I'm just looking for the list of 5 things I should be doing and skipping the extraneous pages of anecdotes.
For me it's the opposite: for example, I had watched countless videos on "disruption" but did not fully grasp it until I read The Innovator's Dilemma. The same holds true for the theory of constraints vs reading The Goal, Robert Cialdini's works on influence, Richard Dawkins's views, etc. In all cases, I thought I understood the material before reading the original source, and every time the books blew me away.
Perhaps there is a dimension about how rigorous the thinking is behind the book. I struggle to imagine a YouTube video that could effectively and convincingly unpack ideas from The Intelligent Investor, The Sovereign Individual, Sapiens, etc. Other topics like "How to get rich with x" or pop-sci covered by the likes of Kurzgesagt are simplistic enough for a video essay, but those are seldom worth consuming regardless of medium.
I think you're hitting at the dichotomy of knowing vs grokking. A lot of powerful concepts have high level summaries that can trick the audience into thinking that's all there is to it. Chris Voss's excellent "Never Split the Difference" comes to mind. A lot of his negotiating strategy can be boiled down to "develop a mutual and deep empathy with your negotiating partner". On some level this makes intuitive sense; people who empathize with you will help you solve your problems, and empathizing with others will help you tailor your solutions to their concerns. But when you sit down to pull this off, you run into two major problems. What is the strategy for doing this? Some phrases tend to antagonize people, while others help build rapport. There are often emotional stages that relationships have to go through, so you need to behave appropriately at each stage, but also you need to move the relationship to the point that is beneficial for you. The second major hurdle is actually implementing these strategies. Even with the head knowledge, when the pressure to perform is on, you may not behave appropriately.
Put another way: it can be easy to gain an understanding of a particular goal, but very difficult to develop a deep understanding of the systems that allow you to achieve that goal.
Yeah, I think in some ways I've also adopted a sort of bimodal approach:
* If it's a somewhat technical topic, go straight for the textbook/papers. If you don't understand them, you won't understand them any better from reading the "pop" material. If it's too complex (say, quantum physics), walk back and get acquainted with more basic material.
* If it's not technical, e.g. popular non-fiction books, listen to a (couple) podcasts. If it sounds like there's more to it than the 5 bullet points the author keeps repeating, get the book. That doesn't happen very often.
> If it's a somewhat technical topic, go straight for the textbook/papers. If you don't understand them, you won't understand them any better from reading the "pop" material. If it's too complex (say, quantum physics), walk back and get acquainted with more basic material.
OK, I've been holding this in and now I finally have an excuse to put it out there:
Trying to talk about quantum physics without math makes it more confusing, not less. You inevitably end up making some weird analogies which aren't analogous, things which an expert might be able to reverse-engineer into the actual concepts but which put a non-expert in a Lewis Carroll Bullshitland, which is like Wonderland only not as amusing and definitely not worth putting in a book, let alone a "physics" book.
At worst, you end up with crap which is actively wrong, like everything Deepak Chopra has ever said in his entire existence.
Meanwhile, you can give people a real understanding of basic quantum mechanics with high-school algebra and a bit of simple logic.
The deep reason behind this is the same damned interpretation problem physicists have failed to solve for nigh-on a century now. We have the math, we know it works, and, miracle of miracles, we can do some real physics with fairly simple mathematical models, but we don't know exactly how the math hooks up with reality. If none of Dirac, Bohm, Feynman, and Pauling could definitively solve this problem, the odds of a pop science author doing so are not worth thinking about.
To drag this back to the topic: A book about quantum physics which includes no math isn't worth reading.
Are there any specific recommendations for: "Meanwhile, you can give people a real understanding of basic quantum mechanics with high-school algebra and a bit of simple logic"?
> Are there any specific recommendations for: "Meanwhile, you can give people a real understanding of basic quantum mechanics with high-school algebra and a bit of simple logic"?
I had "Quantum Mechanics: The Theoretical Minimum" in mind, but I forgot it used some simple calculus, too. It's easy enough to bootstrap from high-school algebra to the kind of calculus it uses, but my statement wasn't correct for that book.
But I'm being unfair: The volume on quantum mechanics is the second volume, and both differentiation and integration are explicitly explained in the first, on classical mechanics.
And there's a difference between using an equation and deriving it. If you don't expect to derive equations, you can still understand quantum mechanics in terms of state vectors, matrix operators, and complex amplitudes turning into probabilities without explicitly using a Lagrangian, which does unavoidably require calculus.
(And, yes, I consider basic matrix algebra and complex numbers to be high school algebra.)
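To make that concrete, here's roughly the whole toolkit in a few lines of numpy - the particular state and operator are a toy example of my own, not from any particular book:

    import numpy as np

    # A qubit state: a 2-component complex vector with unit norm
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

    # An operator: a small matrix. This one is the Pauli-X ("bit flip") gate.
    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)

    state = X @ plus              # applying an operator is just matrix multiplication
    probs = np.abs(state) ** 2    # Born rule: |amplitude|^2 gives measurement probabilities
    print(probs)                  # [0.5 0.5] - equal odds of measuring 0 or 1

That really is state vectors, matrix operators, and complex amplitudes turning into probabilities, with nothing past high-school algebra plus complex numbers.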
I think that defeats the purpose of learning from reading. I just finished the Learning How to Learn class on Coursera and it emphasizes the active learning by doing that needs to take place for one to internalize ideas. By watching summary videos to just get the gist of each book, I think you end up with a not so useful illusion of understanding.
It’s also a good idea to read some of the lowest-rated reviews first, to see what the main complaints about the book are, and whether they would put you off reading it.
This goes for basically anything you purchase online. Read a couple highly positive reviews, read a couple highly negative reviews, then read some somewhat positive and somewhat negative reviews. Get the entire spectrum.
Emotions or lack of consideration cause some people to leave inaccurate ratings but still give useful information in the reviews themselves. Therefore you should never trust a 5-star or 1-star rating, but you should still consider why the reviewer felt compelled to leave such a rating.
At the same time, less extreme ratings might provide a fair and comprehensive assessment but the reviewer might have overlooked a particular edge case or issue.
The positive reviews are always "I bought this for my grandson, I'm sure he will love it, 5 stars" or "I liked it". Negative reviews are a mix of "I hated it" and actionable information "can't use both sections at the same time, wobbles, and edge is too sharp"
I contend that only negative reviews have information; positive ones are propaganda you read with excitement to reinforce your emotional feeling of "I want this thing to enhance my life, I want to be part of the people experiencing this 5-star feeling, I'm dreaming of who I can be if I own this product, let me join in!", they don't tell you useful things.
If you read the negative reviews looking for dealbreakers, and think you can live with the defects described, then it might be a good enough buy. If you want to read the negative reviews with an emotional view, you can use it to reject the dream and stop wanting to buy the item or anything like it entirely, but that's not mandatory.
That depends, though, on what you are looking for. In movies, most of the 1/10 reviews I saw were along the lines of "I hate this kind of movie and hated this one too" (then why did they watch it?) or "this movie pushes some ideas I don't like, so I disregard everything else and give it a shit rating".
For applications, there are a lot of 1-star reviews along the lines of "the author did not translate this into my language, so 1/5" or "I used this in a completely wrong manner and I failed".
For physical goods I mostly agree. Especially on Amazon: if I see an item with a very polarized rating distribution, I assume that half of the shipped products are fakes which Amazon lets slide.
That's simply wrong. In order to make an educated purchase, you need to know the weaknesses as well as the strengths. Reading 1- or 2-star reviews is unlikely to uncover all of the strengths of a product. Comparing products solely based on weaknesses is just dumb.
Uh, if you're an alien, maybe. Most people want to buy a particular thing - a wood chipper, a paintbrush, a laptop - first. They already know the strengths without reading the review - that's what brought them to these specific products and what the marketing bullet points are.
If it has a "surprise strength" which the company who made it didn't notice, and didn't advertise, that's also something which probably brought you to it by referral (like a DVD player which is region unlocked): the reason you're looking at that one is because it was linked on a forum, and now you read the weaknesses.
You want a bluetooth speaker, you find all the ones you can, then look at the negative reviews to see which have poor battery life and which have weak suckers for glass. You don't look in the positives to see if one is secretly really loud, because the negative reviews will tell you, by complaining, if it's too quiet or too loud.
You're not understanding. Wood chippers don't have "strengths" compared to other objects. It's a wood chipper, you can only meaningfully compare it with other wood chippers.
Some are better than others. Why? Because each product will have its strengths and weaknesses. This wood chipper has a great coat of paint which is impervious to scratches! But this wood chipper is much more fuel efficient! And this wood chipper is fully electric!
Those are all examples of a product's strength, not a weakness.
> If it has a "surprise strength" which the company who made it didn't notice, and didn't advertise ...
The whole point of reviews is because we can't trust the advertiser. Especially on Amazon where it's likely from a reseller who may be ignorant or straight up lie about the product.
> Want a bluetooth speaker, you find all the ones you can, then look for the negative reviews of which have poor battery life and which have weak suckers for glass.
Lol. I also care about how good they sound. I don't want to see an absence of reviews saying how bad it sounds. I want to see motivated reviews by enthusiastic users claiming how great the sound is, and then taper my expectations by checking the negative reviews to make sure someone more educated about speakers hasn't made a more in-depth analysis of the sound stage and quality of drivers, cables, etc. Using either source alone provides an incomplete picture.
You're arguing for arguing's sake. You clearly don't have a good method of making an educated purchase, so consider improving it before evangelizing it over well-established, comprehensive methods of making an educated purchase which are undeniably superior.
The idea that "negative reviews by themselves will always contain the full amount of information needed to make an informed purchase" is an axiom of online shopping is laughably preposterous.
The idea that only 4+ star rating books are worth reading is laughable. Reading only very popular books is a recipe to read only books that fit into your own preconceived worldview.
It's always worth checking the distribution. A 3 star rating with a normal-ish curve is very different than a 3 star rating with a lot of 5s and 1s -- and in that case the reviews will tell you clearly whose preconceived worldview the content fits.
That's a good point -- I wish we could see Sparklines-like histograms of reviews.
On a related note, for binary (good/bad) reviews, we can temper the results by assuming a beta(1,1)-distributed prior and then updating [1]. The expectation (which can be thought of as an "adjusted average rating") ends up being:
E[x] = (good + 1)/(good + bad + 2)
This adjusts for situations where the number of reviews is small, in which a good average rating can be misleading.
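As a toy illustration (the review counts here are made up):

    def adjusted_rating(good, bad):
        # Posterior mean after updating a beta(1,1) prior with the observed votes
        return (good + 1) / (good + bad + 2)

    print(adjusted_rating(3, 0))     # 0.80  - a perfect score from only 3 reviews gets discounted
    print(adjusted_rating(190, 10))  # ~0.95 - with 200 reviews, the raw 95% average barely moves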
"If Newton's Principia were published today, it would have 4 stars on Amazon. There would be one cluster of 5 star reviews by people saying it had revolutionized their thinking, and another cluster of 1 star reviews by people complaining it was pointless and hard to read."
and then someone linking to actual reviews on Amazon (avg. rating 4.1).
Especially since one of the books he mentions, Steven Pinker's How the Mind Works, has a 3.97 - which just barely misses the criterion, and yet has a big fat red X. He rated it 2 stars, so this seems to be confirmation bias more than anything.
The rule reminds me of the hiring mentality of "it's worse to make a bad hire than to filter out a good hire."
There are so many books available to me that I do use Goodreads ratings as one heuristic for choosing what to read. It would take a lot for me to read a 2 star book (like a very strong recommendation from someone I trust), not because I am sure it wouldn't be worth my time, but because there are plenty of more highly rated books that are more likely to be worth it.
The author even qualifies the rule: "Unfortunately, Goodreads ratings are often not unbiased.."
Most people rate books not based on whether they are good books, but rather on whether they enjoyed them and believed in what they were saying. Furthermore, there is a certain demographic of people who use and rate nonfiction books on Goodreads, which will skew the rating.
> Reading only very popular books is a recipe to read only books that fit into your own preconceived worldview.
... under the assumption that everybody who rated the book shares your preconceived worldview. Assume that not to be the case on Amazon.com (or similarly large review platforms).
I think it's more laughable for fiction books and less laughable for nonfiction books. Nonfiction books are like tools to me: they either perform or they don't.
I found The Lean Startup relied heavily on anecdotes. The article gives it 5 stars, despite this being one of their main complaints about other books.
To give one example, the book talks about how in their 3d virtual chat game, IMVU, as a shortcut, they initially had the characters teleport around because they didn't have time to implement walking animations. They got some positive feedback from users about the teleporting "feature", and concluded teleporting was a great selling point and they shouldn't implement walking. The book then starts teaching what lessons you should take from this story.
However, around the same time they were making IMVU, Linden Lab made Second Life, which did have walking animations and, I believe, was more successful than IMVU.
It was a common pattern in The Lean Startup to point to a single example that the author thinks worked out well for IMVU and extrapolate advice from it while ignoring any counter-examples.
The point of The Lean Startup isn't to give you advice to run a successful VR/gaming company, though. It's to teach you that your assumptions are probably wrong, and you should figure out ways to test them and get feedback as quickly as possible. It's easy to run a bad test or misinterpret your results -- which is I think what you're getting at -- but it's still important to test things, and at least for me, it seemed like that was the main takeaway from the book.
I think part of why I didn't enjoy The Lean Startup was because I read it too late. By the time I read it, MVPs, "fail fast", and "make something users want" had been the motto of startups for years, and the points the book was making just seemed obvious. Maybe that wasn't obvious in 2011. (Although Paul Graham's essays have been saying this since at least 2005 and are a much better read IMO.)
But the other part was that it was just poor science, relying almost entirely on anecdotes for its claims. Which is not to say his claims are necessarily wrong, just that for any claim that isn't obviously true, there's no convincing data to back it up. There's usually just one success story where it seemed to work, and even within the story, it's hard to know if the strategy was actually successful. Like, as I mentioned before, the book passes off learning the users preferred teleporting to walking in IMVU as a success story, but it may have in reality been the inferior choice. We can't know for sure, but we do at least know other more successful virtual worlds have walking. IMVU never even tested walking. It undermines his credibility when he seems oblivious to his own possible failures. It's a lot of, "this is what we did, and I think it worked out well". Despite advocating split testing, he never split tested the techniques he advocated. There's no control - no baseline to compare to.
Or when he prefaces the chapter on small batches with a third-hand story of a father and two daughters who had to address, stuff, and seal a stack of envelopes. The daughters, aged 6 and 9, felt it would be faster to address them all first, then stuff them all, then seal them all. The father thought it would be faster to do them one at a time. So they each took half the envelopes and the father won.
I enjoy stories, but a 3rd hand anecdote about a father stuffing envelopes faster than two children is just... why? That's not going to convince anyone. He then calls back to this example several times when explaining how to apply this at a software startup (release frequent small updates, use continuous deployment). But even if one-at-a-time envelope stuffing is faster, and the startup advice is right, one does not imply the other. Just cite an actual study, or do an analysis of what was successful at other companies.
Are you under the impression that authors create indexes? Because generally, they do not. Specialists do it (because it's way harder than it looks). Some academic publishers might ask the author to pay for that work; bigger publishers might absorb it. But I can't imagine any respectable publisher accepting an index from an author who did it themselves.
I don't get how this is some big black/white issue. You don't have to finish every book you start, nor do you only have to read a single book at a time, and you can listen to audiobooks at high speeds if time is your main concern. Because in the time it takes you to "research" whether a book is worth your time or not, you could've already read a couple chapters of it and gotten enough of a broad overview of what it contains to decide for yourself.
I've managed to read about 60-90 books a year for the last 6 years, and that's only counting the ones I actually finish. Yet I'm sure the tally of books I've only started/skimmed would be significantly higher than that if I bothered to keep track of it.
It's a big issue because "There are somewhere between 600,000 and 1,000,000 books published every year in the US alone, depending on which stats you believe." - Forbes.
You aren't going to preview / skim literally millions of books which exist to find the ones worth reading. You can't.
> I've managed to read about 60-90 books a year for the last 6 years
That's, what, 8 hours worth of book publishing in the USA, in 6 years?
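Back-of-envelope, with midpoint guesses of my own:

    books_read = 75 * 6                        # ~450 books finished in six years
    published_per_hour = 800_000 / (365 * 24)  # ~91 US titles per hour, using the Forbes midpoint
    print(books_read / published_per_hour)     # ~5 hours of US publishing output

So yes, a few hours' worth, give or take.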
> in the time it takes you to "research" whether a book is worth your time or not, you could've already read a couple chapters of it
Meaning that there would be no actual trade-off in the time invested for you, to take a slightly less superficial approach when evaluating books.
To compare the approach I proposed to the literally impossible task of evaluating every single book ever written (as if anybody would actually even be interested in reading all that) is quite disingenuous. This entire HN thread and article are already excluding most books in existence anyway, by the simple fact that everyone here has been mostly talking about "english non-fiction" books specifically. They may still tally up to a large number, but there's no point in exaggerating their quantity when we're all still just as hopeless in trying to read them all anyway. All you did by pointing out this impossibility is needlessly restate the obvious, which has nothing to do with what I said.
I'm already more than satisfied with the amount of books I read, and get a lot out of them without experiencing any existential anguish over it, so I don't understand why you're trying to throw my reading habits back at me as if they're "not good enough" or something all of a sudden. Or is that what reading is about these days? a mere measuring contest? Am I supposed to feel ashamed I didn't meet some internet stranger's arbitrary criteria for a habit that's supposed to benefit me? I proposed a viable alternative for evaluating books that works more efficiently for me than the one presented in the article, in case others are unsatisfied with their current reading habits. I didn't claim it was something that was going to win you the "reading olympics". If anything, I clearly supported the opposite by emphasizing that people should feel less obligated to finish the books they start, because feeling like you have to finish them is just going to cause needless anxiety about the way you've invested your time.
Besides, who cares if your entire lifetime of reading could've been published in a day? Is your goal in life to out-pace authors and publishers, or is it to get fulfillment out of books? Because in my experience, reading a single chapter out of a bad/mediocre book is a lot more fulfilling than reading a bunch of amazon/goodreads reviews.
I think "interesting" is often a bad rule too, especially if you've just begun reading. At the beginning, you don't have any kind of benchmark or reference for checking if what you're reading is good or not, and will therefore most likely accept anything as truth or insight.
If you've read enough on the other hand, then sure, it might work. Might in the sense that with some luck and taste, you'll find the good books. But it's still reliant on taste, which is very, very biased. Anything that looks uninteresting, but is awesome will be missed.
> Malcolm Gladwell’s books are enjoyable—there is no denying he tells compelling stories—but ultimately they rely heavily on anecdotal evidence. In general, I find that books by journalists cherry-pick examples to fit the argument, and gloss over (or completely ignore) evidence to the contrary.
Addendum to rule 1: Browse through the bibliography.
Quick but effective heuristic to gauge how serious the book is, and how well the author knows his field. If there isn't one, don't bother buying it. If it's a 20 page list of citations from reputable journals (or original sources, if you're reading a history book), then the author doesn't have to be a professor to be believable.
I would add #4: books that are at least 10 years old. I feel like knowledge has to pass the test of time. This rule would rule out "Why We Sleep" for instance, but you'll be able to read it in 10 years if the science behind it still stands.
Terence McKenna's suggestion was "if a book is less than 100 years old, don't read it".
I think it's a combination of "contemporary books will be from the swirl of life around you that you already know, old books will be from a different culture and time and that difference is important" and "knowledge lost, from people who are no longer living".
It would also apply the test of time - Darwinian natural selection of information. That's something we're missing when we want the internet to keep data forever: we should be letting data rot and be lost. Taking ongoing action to keep something in the present is a vote for its importance; setting up a system which preserves information without effort is cheating the system and leaving us swimming in e-waste.
I feel like every book is going to be contested both in the long and short terms. Instead of taking a book at face value, it is important to subscribe to the conversation around it that will inevitably follow.
This seems a bit silly. An analysis of British/EU relations that was written 10 years ago is going to be missing some pretty relevant things that have happened since then for instance.
I like reading non-fiction. I think my rough heuristics for picking books are:
1. Aim for topics I was already interested in before I knew the book existed. There is something to be said for books so good they draw attention to the topic, but that also increases the odds of the book's rating being based on popularity and not quality. I'd rather read a good book about a niche topic that is particularly meaningful to me than a better book that just happens to hit the zeitgeist. In other words, I pick a topic and hunt for books.
My most recent non-fiction books were Derek Wu's book on Spelunky, Chapman Piloting & Seamanship, and Thinking with Type. None of those are going to make Oprah's Book Club, but all were very enchriching for me because they aligned with areas that matter to me.
2. Aim for books that are "canonical" according to people in the field. Signals for this are lots of reviews, especially many reviews over a period of years, because that shows consistent relevance. When people I respect and share interests with mention a book, that's a strong signal. When reviews of other similar books mention it as a point of reference, that's a strong signal.
3. Read a few pages and judge the quality of writing. I have read very very few books where the underlying concepts were valuable enough to be worth wading through bad writing. On the contrary, my experience is that deep, clear thinkers produce quality at all levels of exposition. They wrap their good ideas in good chapters with good paragraphs full of good sentences. Life is too short for shitty prose.
I also have a simple rule for how to increase the quantity of non-fiction I read: put it in the bathroom and don't bring my phone in there. You'd be surprised how much text you can get through one poop at a time. This is great for books where the reading itself is not super engaging but I want to have read them.
Worth noting that Nassim Nicholas Taleb has some interesting explanations in "Skin in the Game" of why practitioners write better and more accurate books than journalists. As you probably guessed from the title, practitioners have a lot to lose by writing a bad book about their practice because it impacts their livelihood. Journalists - while they do face some consequences for bad books - don't pay the same price, and are more likely to be appealing to a general audience when they write a non-fiction book.
I would argue that Goodreads is not a particularly good way to determine whether a non-fiction book is worth reading. Goodreads reviewers are just regular people who evaluate books based on how much they enjoyed them, not how accurate or useful the books are.
If I'm looking for a book I want to learn from, I scout recommendations from journals, magazines, blogs, respected radio programs/podcasts. End of year "best of" lists are also a good source of ideas. Then I read some in-depth reviews of books that strike my fancy by reviewers who have reason to know what they're talking about. Many of these have 4+ star ratings in goodreads, some don't.
Also, a blanket rejection of books by journalists or other non-experts is going to lead you to miss some really good books. That rule immediately called to mind Tracy Kidder. So, "The Soul of a New Machine" is off limits. Really? No thanks. I can think of many others.
Although I haven't read Tracy Kidder's "The Soul of a New Machine", Google categorizes it as Biography, so if you buy that then by the rule of thumb it would not be off limits. And thanks, I have added it to my to-read list!
Tracy Kidder was the immediate counterpoint that came to my mind, as well. "The Soul of a New Machine" probably is (or should be) on every HNer's bookshelf. "Mountains Beyond Mountains" was also a fascinating read about an entirely different subject -- a doctor's humanitarian work.
Also agree that Goodreads reviews require a healthy dose of caveat lector.
A trick I use to "read" books much faster is to feed them through Google's TTS set to a much higher than normal reading speed. I call it robotting a book.
It takes some getting used to and requires more focus but it's very efficient in terms of information dump. I've been able to increase the speed over time. My wife thinks I'm crazy. I use headphones.
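For anyone who wants to try something similar without a dedicated app, here is a minimal sketch of the same idea using the browser's built-in Web Speech API rather than the Google TTS setup described above (the sample text and the 2x rate are just placeholders):

    // A rough sketch of "robotting" a chunk of text with the browser's
    // Web Speech API (an illustration only, not the setup described above).
    function robotRead(text: string, rate = 2.0): void {
      const utterance = new SpeechSynthesisUtterance(text);
      utterance.rate = rate; // 1.0 is normal speed; browsers typically accept 0.1-10
      window.speechSynthesis.speak(utterance);
    }

    // Placeholder text; in practice you'd paste in a chapter at a time.
    robotRead("Chapter one of whatever you're reading...", 2.0);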
I do this with informational YouTube videos. I crank the speed to 1.5-2x depending on how fast the presenter is talking, but I push it to the point where I feel like I'm just barely keeping up, and I try to get faster over time, setting more and more videos to 2x rather than 1.5x.
I use the "video speed controller" extension to set the speed in increments of 0.1x. Works on any HTML5 player (Netflix, YouTube, most random websites).
Nice, I figured I'd probably get some of these suggestions posting this on HN, thanks! I'll check these out. I've been meaning to find something that will let me go past 2x.
Ya, I started doing this while on a bicycling tour and on airplanes. In both situations I can give my brain lots of attention to absorbing the information at a faster rate because my other information inputs are not as busy.
The airplane is easiest for 2x speed because I can close my eyes, which helps me focus on the sound even more.
There's a blind developer giving a talk on Visual Studio accessibility here - https://www.youtube.com/watch?v=94swlF55tVc - and you can hear examples of him using text to speech starting at about 0m 45s but he doesn't say how fast.
Interesting post, but I would say you can easily read more than 3,000 books if you retire at 65 and live to be 90+. Also, there are great non-fiction books by journalists that are not about journalism; a few that come to mind are Bad Blood, The Chickenshit Club, Angler, All The President's Men, and books by Matt Taibbi.
At least some of the books you mention arguably can be (and are by Google) classified as biography, so they would not be removed by the rule of thumb. But perhaps a better way of phrasing the rule would be "Don't read books by journalists on technical subject matter if your intention is gaining an understanding of that subject". If you want to read journalistic works like Bad Blood because it's a heck of a story, then that's a different matter. I am still failing to find a succinct way to express this thought though :)
For similar reasons to the "time value of money" concept, books read now are worth more than books read in the future. e.g. reading GEB at 20 and having it change the course of your life to become a Computer Science Ph.D. is very different from reading it on your death bed.
Moreover, 3000 books is still only scratching the surface of all non-fiction ever published in English (let alone other languages), so strong heuristics will still be needed.
For lack of ideal heuristics, I recently built myself a tool to find the books that "people like me" read most. This is similar to the rating heuristic, except it relies on collaborative filtering and clustering of sorts. The internet is full of people who have reviewed and curated lists of books, books that can thus be associated with each other. And so now I can ask my tool a simple question: for people who enjoyed these N books, what other books did they most enjoy? This is something I've found incredibly lacking in the recommendation mechanisms of Amazon/GR/etc., which is why I built the tool. I won't plug it here since I don't want to be spammy, but I can share it directly with people if they'd like.
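To give a sense of what I mean by collaborative filtering of sorts, the core "people who enjoyed these N books also enjoyed..." lookup can be sketched as a simple co-occurrence count; the data structures and names below are hypothetical, not the tool's actual code:

    // Sketch of the co-occurrence idea. `shelves` maps a reviewer to the
    // set of books they rated highly; the data and names are hypothetical.
    function alsoLiked(
      shelves: Map<string, Set<string>>,
      seeds: string[],
      topN = 10,
    ): [string, number][] {
      const counts = new Map<string, number>();
      for (const books of shelves.values()) {
        // Only look at reviewers who liked every seed book.
        if (!seeds.every((seed) => books.has(seed))) continue;
        for (const book of books) {
          if (seeds.includes(book)) continue;
          counts.set(book, (counts.get(book) ?? 0) + 1);
        }
      }
      // Most frequently co-liked books first.
      return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, topN);
    }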
I do think these are some good guidelines that are definitely worth at least considering if you're struggling with choice. One other method I've used to great effect is to read books that I see cited at least 3 times in 3 different places. So if I'm interested in or researching a topic and I see a book or work cited (or recommended) by multiple independent sources, I'll add it to my list. It really helps me find a lot of non-fiction/research-based works with generic titles that I may not come across otherwise.
Obviously, YMMV. This can lead to issues (for example, ideological hegemony in a lot of the fields I'm researching in) and it's by no means the only method I use to pick works. But it has been really helpful for me!
#2 shouldn't be a hard and fast rule, especially with respect to textbooks, and especially when it comes to subjects like math. A student rating a phenomenal book 2 stars because "...it was assigned and we had to buy our book. But the book is too hard to understand, explains jack shit and totally worthless. I give it two stars only because I bought it at 1/4 the price on sale, otherwise this book barely deserves 1/2 star at most. Believe me, I am not dumb - I sometimes even read straight out the textbook." is not rare.
I read comments only after I'm done with the book.
I think the journalist rule is good for books that go deep into a topic, like many of the ones you've cited here. On the other hand, I would expand your caveat around biographies to say that journalists are generally very good when it comes to writing about events.
Barbarians at the Gate is a good example of this - it required authors who were able to dig deep into _what_ happened. There's obviously some info on why things happened there as well, but the primary purpose of the book is to inform the reader of what occurred, which is a good use of the journalistic skill set.
Paul Fussell's book, Class: A Guide Through the American Status System is one of my favorite non-fiction books. I consider myself lucky to have discovered a copy at a used book store 20+ years ago. Unfortunately, it only has a 3.95 on Goodreads, so I guess it's going in the trash.
The Empathy Exams by Leslie Jamison only has a 3.63 and I don't think she's really an "expert" in any of the specific topics she writes about. It's brilliant ...but maybe I can still return it?
This system is a great way to hyper-optimize for narrow-mindedness.
My experience with the Very Short Introduction series is mixed. I kind of wish there was more editorial oversight to ensure that the style, content and pacing are more consistent from book to book.
Some books I've read don't really follow the spirit of "Very Short Introduction" and are rather dense. The quality of writing also varies quite a bit.
Overall though, they are a good first stop if you want to get basic familiarity with a topic.
I'm actually a fan of dense. If I have to reread a well-written paragraph five times to understand it, that's fine by me. When I'm reading serious non-fiction (currently Predicate Calculus and Program Semantics by Dijkstra and Scholten) I consider it normal to just make it through a handful of pages per day.
The problem I see with these rules is that you're only going to read new books.
I use a heuristic that the best book about a given topic was most likely written half as long ago as people have been thinking about that topic. People write more books now, but they get less interested in old topics as time goes by, and those effects tend to cancel out.
Ratings aren't fungible from person to person, so I reject the whole "4.0 rating" thing. A book could have a 2.0 rating and I might still find it useful. Generic ratings like that are, at best, a VERY weak signal, and certainly not something I'd ever incorporate into an ironclad rule.
For non-fiction I only read what was directly recommended by somebody whose taste I already know. It takes discovery out of the equation and I definitely miss good books, but since I don't read that many non-fiction books anyway, this method is highly efficient.
I guess many journalists view themselves as the ideal outsider. If you deny them a place on your reading list (or demand that they only write about their job) you deny that whole vocation. The world would be a poorer place if we all did that.
#1 is kinda tricky to apply to books that take down pseudo-science. In those cases, who is and isn't an "expert in the field" (and whether the field even meaningfully exists) is exactly the question.
I would recommend still reading books written by journalists, since some can be very good. But for these, adjust your Goodreads rating threshold upward, to something more like 4.2 or 4.3.
My rules: be interested in some topic, ask around for recommendations, then pick one or two of the recommended titles based on how people described them in the responses you got.
Do we really need to read a book a week? I mean, just reading a book once won't really help much. Isn't it better to read less but read those books deeply?
Sounds like what humans do. We notice features in our world that we're looking for or just learnt about, patterns we've noticed before, things we're attuned to, things that produce emotion - these seem significant - and don't notice others at all. (This is called "cherry-picking" when other people do it.) Over time, we come by a lot of evidence for our private pet theories. Some people write that all down and make books from them. Good luck to them.
I love reading nonfiction. But, as I am sure you can relate, I only have limited time. Even if we were able to make enough time to read one book a week on average—certainly not the case for me right now—we would still only be able to read around 3,000 books in our entire adult lives. At my current pace, the real number will likely end up being far lower.
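A quick back-of-the-envelope check of that ~3,000 figure (the ages below are an assumption for illustration, not part of the original claim):

    // One book a week over roughly 60 adult years.
    const booksPerYear = 52;
    const readingYears = 80 - 20; // assumed reading life from ~20 to ~80
    console.log(booksPerYear * readingYears); // 3120 -- roughly 3,000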
Rules restrict the diversity of your learning and limit your knowledge.
You don't have time for everything, so it's good to have rules. But always remember that on occasion you need to break your own rules; otherwise your knowledge becomes biased.
My extra rule to overcome this bias: if anyone recommends me their favorite book, I will read it, no matter how stupid I believe the recommendation is and no matter how many rules it violates.
I found The Da Vinci Code this way. That book was a huge mistake, but at least now I've actually read it and know definitively that it was a mistake.
[EDIT: the post says "Prefer books by experts in the field" and says these are people who have spent their lives researching that field. It gives GEB as an example of such a book. That claim is factually incorrect and calls into question the idea of requiring books to meet those criteria. Would any of the many people who've downvoted my comment care to explain what you find objectionable?]
> The best nonfiction books I have read have invariably been by folks who spent their lives researching that particular issue. A couple of books in this category immediately come to mind: Why We Sleep, The Language Instinct, Gödel Escher Bach.
"The idea that changed Hofstadter’s existence, as he has explained over the years, came to him on the road, on a break from graduate school in particle physics. Discouraged by the way his doctoral thesis was going at the University of Oregon, feeling “profoundly lost,” he decided in the summer of 1972 to pack his things into a car he called Quicksilver and drive eastward across the continent. Each night he pitched his tent somewhere new (“sometimes in a forest, sometimes by a lake”) and read by flashlight. He was free to think about whatever he wanted; he chose to think about thinking itself. Ever since he was about 14, when he found out that his youngest sister, Molly, couldn’t understand language, because she “had something deeply wrong with her brain” (her neurological condition probably dated from birth, and was never diagnosed), he had been quietly obsessed by the relation of mind to matter. The father of psychology, William James, described this in 1890 as “the most mysterious thing in the world”: How could consciousness be physical? How could a few pounds of gray gelatin give rise to our very thoughts and selves?
Roaming in his 1956 Mercury, Hofstadter thought he had found the answer—that it lived, of all places, in the kernel of a mathematical proof. In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” He sat down one afternoon to sketch his thinking in a letter to a friend. But after 30 handwritten pages, he decided not to send it; instead he’d let the ideas germinate a while. Seven years later, they had not so much germinated as metastasized into a 2.9‑pound, 777-page book called Gödel, Escher, Bach: An Eternal Golden Braid, which would earn for Hofstadter—only 35 years old, and a first-time author—the 1980 Pulitzer Prize for general nonfiction."
What is your point? When Hofstadter was 35, he had spent his adult life studying a subject, and that study culminated in the publication of a book about it. He wasn't passively throwing words on pages just to get something published; he was publishing his life's work.
You seem to be arguing that, because he continued to live afterward, he somehow didn't spend his entire life on the book. But that's not what was being asserted -- only that he had spent "his life" up to the time of the work's publication on the work. And it's not even relevant if it were meant the way you seem to be assuming, because Hofstadter has continued to study the very same subject ever since. He's also published more books on the very same matter in the following years. It was his life's work then, and it continues to be today.
> When Hofstadter was 35, he had spent his adult life studying a subject
First, a nitpick - the book was published when he was 34, and he was even younger when he wrote it.
But the main point is that the post is clearly talking about people who have spent the entirety of a lifetime studying an area, not someone who has spent their adulthood so far studying the subject.
And Hofstadter clearly hadn't devoted the entirety of his adulthood up to age 34 studying the subjects of GEB. The quoted passage says that his formal study had been in particle physics.
I disagree with your assessment of what the main point is. I don't think anyone else is interpreting it that way, and can confidently assert that the error is entirely on your end.
Moreover, Hofstadter's study of physics isn't unrelated to his study of intelligence. I personally got a degree in physics to study language and intelligence, because the way physicists use language, analogy, and simple concepts to understand the world is particularly effective and interesting. So I can tell you first-hand that they are related. In fact, anyone who studies philosophy is probably making a serious mistake to not study physics first.
> I don't think anyone else is interpreting it that way, and can confidently assert that the error is entirely on your end.
The post is very clear that it's talking about expertise in the sense referred to in my comments (which, as also indicated in my comments, I don't fully agree with):
"Rule #1: Prefer books by experts in the field
The best nonfiction books I have read have invariably been by folks who spent their lives researching that particular issue. A couple of books in this category immediately come to mind: Why We Sleep, The Language Instinct, Gödel Escher Bach.
Positive indicators of this in a blurb may include “Professor in [field directly related to the book’s topic]”, “Long-time researcher in [field directly related to the book’s topic]”.
Note how they say "Professor in" and "Long-time researcher in".
The way you're using the term, a 21-year-old can have "spent their life researching the topic" if they've been focused on it for the previous three years.
You don't need to focus on these kinds of semantic quibbles if you want to miss the point. You can do that directly.
Albert Einstein was 21 when he published his first paper, and 26 when he published his best. Please tell me that he didn't spend his life researching physics, even then.