We found no evidence that Bionic Reading has a positive effect on reading speed (readwise.io)
211 points by hoodwink on July 19, 2022 | 139 comments



Can we kill this entire fad of speed reading and productivity life-hack nonsense already? I propose we start to advocate slow reading, in the spirit of Peter Norvig's "Teach Yourself Programming in Ten Years," and tell people to actually enjoy what they're reading.


I disagree. There are things I want to learn and I really don't care for the fluff.

Ironically (or not?), self-help books are absolutely full of fluff but at their core can have very useful/helpful ideas. They just get expounded upon and backed up with 10 specific examples, where each example starts at the beginning of the person's life in full detail.

Speed reading has helped me get through those books and learn something where I would have wasted so much time or actually never even picked the book up.


> Ironically (or not?), self-help books are absolutely full of fluff but at their core can have very useful/helpful ideas. They just get expounded upon and backed up with 10 specific examples, where each example starts at the beginning of the person's life in full detail.

That's not fluff. It's an artifact of people being different and the goal of the book being to connect with someone and make an impact. A list of pithy principles would have no impact and make no connection. The exposition can do that (e.g. relatable story, example of an application that's close enough to the reader's circumstances), but not all of them will connect with every person.


> A list of pithy principles would have no impact and make no connection.

In case anyone wants to put it to the test, here is a book consisting of precisely that (by Rochefoucauld, from the 1600s): https://www.gutenberg.org/files/9105/9105-h/9105-h.htm#linkm...


Maybe "no impact" was overstating it a little bit, but I think the point still stands. They often have far less impact when they're not reinforced by the right story or example, but that story or example won't be the same for everyone.


How could an author who knows nothing about you possibly be better than yourself at constructing an example applicable to you?

It then seems to me these kinds of books must be made for people who want others to think for them.

So maybe all these books should say is: "Think. Your problem is a lack of aforementioned activity. If you need some food for thought here's a list of ten pithy principles. Flip to page two for an afterword by my publisher."

Of course that wouldn't be good business sense. Why show someone the spring when you can sell them water?


> How could an author who knows nothing about you possibly be better than yourself at constructing an example applicable to you?

He can't. The author can't predict which story or example will connect with you, so he includes many that connected with someone, with the hope that some fraction will connect with any given person. That's why people complain about "fluff": they're annoyed by the stuff that doesn't connect with them, but they haven't thought about it beyond their own personal experience. Maybe they could redact the book down to 10 pages, but so could everyone else, and all the redactions would be different.

> It then seems to me these kinds of books must be made for people who want others to think for them.

You're being uncharitable and kind of conceited. Would you say Calculus textbooks are for people who want others to think for them? Do Real Men take a short primer on mathematical logic and the axioms of ZFC set theory, and go derive Calculus for themselves?


> they're annoyed by the stuff that doesn't connect with them, but they haven't thought about it beyond their own personal experience.

Emphasis mine.

> Would you say Calculus textbooks are for people who want others to think for them? Do Real Men take a short primer on mathematical logic and the axioms of ZFC set theory, and go derive Calculus for themselves?

My response was argumentum ad absurdum, so I gain nothing from defending the position, but still:

Walter Rudin's Principles of Mathematical Analysis would be a terrible book if it tried to explain the purpose of the material at every step. Nobody expects a mathematics textbook to do that.

In fact, Principles of Mathematical Analysis is a great book for being extremely concise and containing just what is necessary for a reasonably intelligent reader to understand the material.

To be more clear, I don't believe that your explanation could possibly be a good reason for these stories being included in a self-help book, though I don't preclude that they may serve another purpose.


> ...just what is necessary for a reasonably intelligent reader to understand the material.

What about readers who aren't "reasonably intelligent" (which often means "quite a bit more intelligent than the typical person")?

> To be more clear, I don't believe that your explanation could possibly be a good reason for these stories being included in a self-help book, though I don't preclude that they may serve another purpose.

What's the reason for your emphasis there? Are you reading the "self" part too literally or idiosyncratically? IIRC, the "self" just means the book is meant to help the reader with his problem without personal guidance from some professional. It doesn't mean the reader is supposed to figure it out on his own.

As I've gotten older, I've gotten more wary of certain biases that engineer-types tend to indulge in. One of them is along the lines of "I'm so smart, I think I can figure it out on my own, therefore everything I think I don't need is unnecessary." Another is the temptation to confirm one's intelligence by seeing the "real" reason as some cynical ploy that works on lesser people.

Also, I'm not saying every self-help book is good, or that it never happens that examples are truly just padding. It's just that there are good, non-cynical reasons not to reduce everything down to some pithy list of axioms. I know for a fact that at least one well-regarded book is structured that way, and it was a bit of a slog because of all the examples and stuff that didn't connect with my particular circumstances (but there's no way for someone I never met, writing a couple years before my birth, to tailor anything to me).


People can’t think deeply about everything they do.

Examples are shortcuts to find relevance. Once I've decided a topic is relevant to me, then I think deeply about it.


Variable-speed reading. Quickly skim over the fluff and slow down for the actual information.


Agree. Even beyond enjoyment I’ve heard multiple very accomplished people in a variety of fields (from theology to math) say in interviews “I’m actually a very slow reader.”

So sure, reading fast would be nice, but I think that's mostly biological, whereas slow, quality reading is something you have full control over, and that's what we should focus on instead.


Ok, but… I love being able to speed read. Love it. And I can’t speed read many things, like philosophy or “zorba the Greek.” Lol just kidding I totally speed read that shit (but slow down when necessary)


“Some books are to be tasted, others to be swallowed, and some few to be chewed and digested; that is, some books are to be read only in parts; others to be read, but not curiously; and some few are to be read wholly, and with diligence and attention.”


Well there is some stuff you want to enjoy and there's other stuff you want to process as fast as possible so you get more time to read the stuff you enjoy :)


As a life-long speed/skim reader, 100+

It's been increasingly pointed out to me that my fiction reading habits, which permit enjoyable re-re-re-reading (the constant surprise of things missed on each prior read), are anti-patterns for legal or technical writing: in those cases, reading deeply and slowly is far better.

There's nothing wrong with re-reading per se. It's about context.


> in the spirit of Peter Norvig's "Teach Yourself Programming in Ten Years," and tell people to actually enjoy what they're reading.

Not everything should be enjoyed, though, and that needs to be better known, lest people feel they're doing it wrong when reading skimmable material back to back.


Here’s what appears to be an independent review of Spritz:

https://www.tsw.it/wp-content/uploads/Rapid-serial-visual-pr...

One of the issues with tests like these is that the companies sponsor the research, and they are one-off rather than long-exposure studies; that is, as with most high-throughput interfaces, it takes time for the mind to adjust.

My guess is that, given human speech appears to have a universal transmission rate of around 39 bits per second, on average that's going to be the actual performance target:

https://www.science.org/doi/epdf/10.1126/sciadv.aaw2594
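For a rough sense of where a number like 39 bits/s comes from, here's a back-of-the-envelope sketch. The speaking rate and vocabulary size below are illustrative assumptions of mine, not figures from the paper, and real per-word entropy is lower than this uniform estimate; it's only meant to show the order of magnitude:

    import math

    # Illustrative assumptions, not numbers taken from the paper:
    words_per_minute = 150      # typical conversational speaking rate
    vocabulary_size = 50_000    # rough active adult lexicon

    words_per_second = words_per_minute / 60           # 2.5 words/s
    bits_per_word = math.log2(vocabulary_size)         # ~15.6 bits if every word were equally likely
    print(round(words_per_second * bits_per_word, 1))  # ~39.0 bits/s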


Blind people who use screen readers usually turn up the speed to something unintelligible to the rest of us.

Maybe reading a computer screen is a simpler task than talking person to person, but it's an interesting datapoint!


I am definitely not blind, but I listen to audiobooks and podcasts at high speed all day at work. I average around 2.5x-2.8x speed (lower for certain narrators, and up to 3x+ if I am not doing something that requires much of the language-processing part of my brain or anything mentally taxing). I could drive or play a non-text/plot-heavy video game, but not talk or problem-solve.


While I agree that both the blind and others, including myself, listen to screen readers and audio at a higher rate, that's not actually research that's reviewable and shareable. Speaking for myself, I'm 100% sure the noise-to-signal ratio increases when I do, but if needed, I just go back X seconds and relisten to the prior audio. I have never systematically tested my comprehension when speeding up relative to when I'm not. The speakers, vocabulary, topic familiarity, etc. also make a huge difference; i.e., prior familiarity in general with the input.

On the flip side, I provided research on transmission rates, which to me seems reasonable, but another user shared research on reception rates, which to me is unreasonable:

https://news.ycombinator.com/item?id=32160095

To me, I am interested in notable, reviewable progress in understanding the topic — not chatting about it around an internet campfire.


Not blind, but I often watch youtube videos from 2x - 4x speed, only going below 2x to review complicated information (pausing a tutorial to read the text/code/image on the screen) or if the timing of information is part of the information (music, comedic timing, etc...)


> human speech appears to have a universal transmission rate of around 39 bits per second

This is unacceptable, we need a modern language that packs information tighter!


I think this is in jest, but to answer seriously: No, I think that's actually backwards, trying to make a language more "packed" will harm information flow.

Evidence suggests the real limit is how quickly human brains can take ideas/qualia, convert them into abstractions, and encode the abstractions into language. This is because (A) very different languages still exhibit similar limits and (B) those limits appear to be governed by the sending-side. People can comprehend spoken words at a higher rate than they can spontaneously emit them.

So trying to make the language more "compact" would likely just waste precious brain-cycles on the compression step, which isn't actually necessary when your mouth already supports talking faster.

Technology analogy: Two computers are collaboratively solving a problem with back-and-forth messages. The network connection is actually very good, however the bottleneck is the CPU in each computer. Will the problem be solved faster if you change the transmission style from plaintext to gzipped?


The other problem with this idea is that natural languages contain quite a bit of redundancy, which is why speech and writing can be decoded even over very noisy channels. It’s hard to make language more compact without removing some redundancy.

That being said, this has been attempted before: see https://en.wikipedia.org/wiki/Speedtalk and https://www.zompist.com/kitlong.html#howmany, and especially https://web.archive.org/web/20000503004430/http://fatmac.ee.... for a detailed attempt.


A related rant I've had over the years: the redundancy present in human language is not a flaw to be optimized away. Evolution went there because it acts as a form of forward error correction.
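For anyone who wants a concrete toy model of "redundancy as forward error correction," here's a minimal sketch using a repetition code. It's only an illustration of the principle, not a claim about how language actually encodes anything:

    from collections import Counter

    def encode(bits, r=3):
        # Repetition code: say everything r times (that's the redundancy).
        return [b for b in bits for _ in range(r)]

    def decode(received, r=3):
        # Majority vote per block; survives up to (r - 1) // 2 flips per block.
        return [Counter(received[i:i + r]).most_common(1)[0][0]
                for i in range(0, len(received), r)]

    msg = [1, 0, 1, 1]
    noisy = encode(msg)
    noisy[1] = 0                  # the channel flips one symbol...
    noisy[7] = 0                  # ...and another, in a different block
    assert decode(noisy) == msg   # the receiver still recovers the message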


Except when the complex houses married and single soldiers and their families. :P


Wow, I've never seen such a good one. I spent a solid minute frozen in this garden!


That's not redundancy, that's multiplexing!


> People can comprehend spoken words at a higher rate than they can spontaneously emit them.

Like so many things with people, it depends on what they practice. Most people practice listening far more than rapid speech. Some people can speak faster than most people can comprehend. Part of learning to do that may involve not listening (to themselves) in the same way.


> So trying to make the language more "compact" would likely just waste precious brain-cycles on the compression step

If you learned this new more compact language as your native language or to native fluency, I don't imagine you would need to go through a compression step, since all of your thoughts would already have happened in the compacted form.


That raises the question of why it seemingly hasn't happened already.

One option is that what you're describing just isn't possible, that humans are already butting up against some kind of limit which is not avoidable simply by being raised with a different language.

Another option is that one can be raised to think in "pre-compressed" structures, but nobody does because it's a bad tradeoff, dropping "general thinking" performance with a worse impact than any "faster speaking" benefits. (Such as being simply slower, or more error-prone, or more demanding on attentional resources, etc.)


>> “People can comprehend spoken words at a higher rate than they can spontaneously emit them.”

If you have any links to research supporting this, I would be interested.


A Large Inclusive Study of Human Listening Rates [1] focused on finding optimal generated-speech rates for screen-reader software:

> The mean Listening Rate was 56.8, which corresponds to 309 WPM. Given that people typically speak at a rate of 120-180 WPM, these results suggest that many people, if not most, can understand speech signifcantly [sic] faster than today’s conversational agents with typical human speaking rates.

While there was some difference between the visually-impaired and normal-vision respondents, I don't think it's enough to matter to the thesis of my post:

> [...] The mean Listening Rate for visually impaired participants was 60.6 (334 WPM) while for sighted participants it was 55.1 (297 WPM).

1: https://dl.acm.org/doi/abs/10.1145/3173574.3174018


Thanks, I reviewed the research; it's unclear though how a synthetic voice reading off a single word per unit of measure is an accurate gauge of average listening rates.

From page 4 of 12 of the PDF you linked to: "Rhyme test: measures word recognition by playing a single recorded word, and asking the participant to identify it from a list of six rhyming options (e.g., went, sent, bent, dent, tent, rent). We used 50 sets of rhyming words (300 words total), taken from the Modifed Rhyme Test [27], a standard test used to evaluate auditory comprehension." The word list the research used is on pages 30-31 of 55 of this PDF:

https://www.researchgate.net/profile/Michael-Hecker-3/public...

If true, the test appears to not even be measuring word recognition; it's more accurately measuring single-phoneme recognition. If the listener picks the correct phoneme from a multiple-choice list, the researchers assume the person would hear and understand 100% of any expression received at that rate of speed, which in my opinion is clearly flawed.

I would be the first to agree that testing listening comprehension rates is hard to do, which is why I asked to review the research, but unless I am misunderstanding something, it's unclear to me how this research actually provides any meaningful observations.


I know you're probably joking, but for those out of the loop, the result your parent was referring to is that more verbose languages have faster speakers and the two effects compensate one another pretty well.

https://www.science.org/doi/10.1126/sciadv.aaw2594


Not a language, but a font: https://dotsies.org/

Discussed in HN multiple times https://news.ycombinator.com/item?id=18703805


No, latin is the only language we'll ever need! It is much closer to the fundamental truth, unlike these modern languages that make unnecessary abstractions of what is really going on at the low level.


We need to start in schools, teaching our children to speak in zstd. It could more than triple their throughput!


I wonder if the bottleneck is in speaking or listening.


The article's conclusion doesn't seem to match the data they collected. They found that most people (52%) do read faster with Bionic Reading; the effect is generally quite small, but large for some minority of users: at least one user read 293 WPM faster with Bionic Reading!

The authors then average results (good and bad) across all users, resulting in a number close to zero, and conclude Bionic Reading doesn’t work for anyone, even calling it a placebo effect at the end.

The problem is, all brains are not the same. It doesn’t have to work equally well for everyone to be valuable.


People collect all sorts of fascinating data but then analyze it so superficially, throwing very low-hanging fruit away without a thought. The biggest loss is information about individual variation.

Why do we only crunch data down to averages? Differences in group variation can be measured just as rigorously as differences in group averages. Repeatedly testing an unusual individual is perfectly legitimate and scientifically interesting. There is no Law of Science that says you have to execute a boring protocol that rigidly assumes everyone is the same.

There's a classic story about how the Air Force learned that no one is average, and switched from fixed "average" cockpit fittings to adjustable ones, greatly reducing accidents. But it seems like no one has learned from this.

For heaven's sake, let's try to learn about individuals and what works for them!


The results are what you would expect if you measured two identical fonts, i.e., random.

You can make the _claim_ in your comment, but there isn't statistical evidence that this effect is more than random, or that if the same people were tested again their results wouldn't be the opposite. Put another way, it fails to reject the null hypothesis.


I think the best approach would be a graph showing the distribution of users across different WPM differences, and if possible a second experiment with two non-BR fonts (to see how much random variation is expected). The positive results might all be noise, but I don’t think that should be the default assumption.


FWIW, the article does include some data on the distribution, as well as address your exact objection:

> Since posting this experiment, I've received a lot of side comments along the lines of, "Well, of course I don't expect Bionic Reading to work for most people, but for [my subpopulation], it really works." If that were the case, we might expect to see disproportionate benefits for those participants who read faster with Bionic Reading than for those who read faster without Bionic Reading. Let's look at how many participants read faster with each font and their average speed gains.

Table 3: Summary speed differences per faster font

    Faster font   Count   Percent   Delta (WPM)
    Bionic          998       52%            35
    Non-Bionic      918       48%            43

> The number of people who read faster with Bionic Reading was slightly greater (52%) than the number of people who read faster without Bionic Reading (48%). That said, those who read faster with Bionic Reading only picked up 35 words per minute on average. In contrast, those who read faster without Bionic Reading picked up 43 words per minute. It does not appear that when Bionic Reading works, it really works.


Yes, but they’re still averaging the results in the “it worked” category. Hypothetically, if it worked very well for 5 users and only a little for 95 users, an average score over all 100 will make it appear to not work for anyone. I’m not going to argue this is definitely the case - just that this analysis doesn’t account for the possibility, and it wouldn’t be unexpected in the area of reading ability, where one would expect individual differences to be important.
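A quick toy simulation of that scenario (numbers entirely made up) shows why the population mean can look like nothing even when a small subgroup genuinely benefits:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    responder = rng.random(n) < 0.05               # suppose 5% genuinely benefit...
    true_gain = np.where(responder, 60.0, 0.0)     # ...by +60 WPM; everyone else by 0
    observed = true_gain + rng.normal(0, 40, n)    # plus ordinary test-retest noise

    print(observed.mean())             # ~3 WPM: looks like "no effect" overall
    print(observed[responder].mean())  # ~60 WPM within the subgroup

The catch, as the sibling comments point out, is that without a pre-registered way to identify the responders, picking out whoever happened to improve is mostly just selecting on noise.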


> the effect is generally quite small, but large for some minority of users - at least one user read 293 WPM faster with Bionic Reading!

Well, that's not really something you'd want to admit in your paper, much less advance as an argument. The obvious conclusion is that there's a mistake in your data, not that some users will see an increase in reading speed of 4.9 words per second.


Reminds me of The End of Average, [1] a book about human variation. It talks about how the Air Force tried to design a cockpit by averaging various measurements, only to find that their pilots weren't actually average.

Your point is especially apt here, where there is a possibility that the technology could be helpful for a subpopulation, and act somewhat like an assistive technology. It is very tricky to assess the utility of assistive technologies (I work in this field and have asked many experts how they do it) because the average impact isn't the most important thing. It doesn't matter if high-contrast mode is bad for 90% of people, if for the 10% who would actually use it, it's helpful.

One way that the study could have been optimized is if participants had been asked (after taking the test, to avoid priming) if they have any specific reading challenges. That would have helped identify whether there appear to be any subgroups that disproportionately benefit from the approach.

1: https://www.amazon.com/End-Average-Succeed-Values-Sameness/d...


I think the idea here isn't that we're treating all brains the same, but that the deviations represent statistical noise/outliers rather than an actual effect from the bionic intervention.

I agree though, it would have been an interesting follow-up if they asked only people who felt they benefit from it, and then conducted their analysis again on that sample!


> we intend to run some more tests in the hopes of discovering a screen reading technique that yields material benefits. We're aware of a couple other technologies that seem interesting including BeeLine Reader, Spritz, and Sans Forgetica.

BeeLine founder here — feel free to reach out prior to your testing (nick@[domain]). We can share info on past testing and implementation techniques.

We've wondered about Sans Forgetica in the past, which may increase reading comprehension (at the expense of speed). It might be interesting to try a BeeLine/Forgetica test, to try to get the best of both worlds!


Coincidentally, I was wondering a similar thing this morning. To get as much text as possible on paper for holiday reading, I managed to shrink the text by 30% by stripping out any vowel but the first in a word, and replacing 'the' with '#', 'and' with '&', and 'of' with '%'. I also used the smallest Tahoma font and multiple columns. This increases reading complexity just as Sans Forgetica does, but perhaps also increases reading speed somewhat, because words get stripped down to their 'essence' (perhaps similar to Chinese or Japanese script) and your eyes move less with multiple columns. In that 'word essence' sense, this technique has similarities to Bionic Reading. Combining the two would be tricky: which letters would be bolded with the devowel method, and can you get closer to a word essence? The combination of BeeLine with Bionic Reading or devoweling could have more potential. It's nice that BeeLine has grayscale support.

sed -e 's/ and / \& /g' -e 's/ of / % /g' -e 's/ is / = /g' -e 's/ the / # /g' -e 's/\B[aeiouAEIOU]//g' -e 's/\(.\)\1/\1/g' -e "s/'//g" sacred-world.txt > sw.txt
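If anyone wants to try combining the two, here's one naive way to layer Bionic-style bolding on top of the devoweled output. The "bold roughly the first half of each word" rule is my guess at the fixation heuristic, not Bionic Reading's actual algorithm:

    import math, re

    def bionicize(text, frac=0.5):
        # Bold roughly the first half of each word's letters.
        def bold(m):
            w = m.group(0)
            k = max(1, math.ceil(len(w) * frac))
            return "<b>" + w[:k] + "</b>" + w[k:]
        return re.sub(r"[A-Za-z]+", bold, text)

    print(bionicize("Qckly skim over # fluff & slow down for # actl infrmtn"))
    # <b>Qck</b>ly <b>sk</b>im <b>ov</b>er # <b>flu</b>ff & ...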



Thanks for the info — I’d been hopeful about Forgetica based on some general research about disfluent fonts and memory. But I guess things don’t look so rosy for this particular one, at least based on this one study. Thanks for sharing!


nice to meet you, Nick! shooting you an email now :)


Hey everyone, this is the follow up post to the experiment posted here on Hacker News a few weeks ago: https://news.ycombinator.com/item?id=31826204.

I'll be hanging out in the replies today if there are any questions I can answer :)


I stalked the post's author (your?) LinkedIn and didn't see any background in statistics. How do you know to design this experiment so nicely?


It is a pretty standard statistical test taught in any bachelor-level class covering statistics (or earlier; in Germany we did this in 11th grade). This setup also includes https://en.wikipedia.org/wiki/Blocking_(statistics), another pretty standard method you learn just by looking at how others do experiments.
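For the curious, the "block" here is the participant: everyone reads with both fonts, so the analysis boils down to a paired t-test on per-person differences, which cancels out the large between-person variation in reading speed. A minimal sketch with toy numbers (not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 200
    person = rng.normal(250, 60, n)          # big between-person differences (the "block")
    bionic = person + rng.normal(0, 30, n)   # assume no true font effect
    regular = person + rng.normal(0, 30, n)

    # Pairing each person with themselves removes the between-person variance,
    # which is exactly what blocking buys you.
    t, p = stats.ttest_rel(bionic, regular)
    print(f"paired t = {t:.2f}, p = {p:.3f}")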


I've said this via Twitter and I'll say it now: I'd expect there to be huge learning effect sizes for anything that alters the way we parse information, even in such "minor" ways as BR.

I'd like to see what the difference would be like after a day, four, and a week of reading articles with BR, if that's viable to enlist people for at all.


Excellent experimental design and analysis!

Is there a way for me to buy you a coffee (or some cash equivalent)? I'd love to incentivize such rigor in online discourse.

An obvious way is to try out readwise ;) but I mostly read online articles and (pirated) books, so I'm not sure I'm a good use case.


these might be the kindest words i've ever gotten in a hacker news thread so thank you! :D

we've actually built an app for reading articles and ebooks (and RSS and PDFs and email newsletters and Twitter threads): https://readwise.io/read

still in private beta but we'll be entering public beta before summer is over!


So Bionic reading is just a form of speed reading?

And it turns out that comprehension is suffering from trying to read faster... I think that most people, when they read, do a sort of automatic "speed reading" in their mind already.

What I mean is: I notice that when I read, my eyes will skip parts of words where I don't need to see all the letters to know what word it is. What if all this trickery simply hinders our innate and built-in "automatic speed reading" capability?


My best interpretation of the test results is that Bionic Reading is just an exotic text styling that has no impact -- positive or negative -- on reading speed or reading comprehension.

If it were a form of speed reading, we would have observed faster speeds when using Bionic Reading (and probably lower comprehension as a trade-off). This is NOT what we observed.


I thought so too, but I've noticed in some "pair-programming-like" experiences that some people waste so much time reading posts and SO comments/threads that are obvious dead ends, and take too long to jump to the relevant section in logs or docs... So I think most people don't do speed reading at all


Having just tried it, I definitely felt like the bionic reading was completely unreadable. I am very much a skimmer, and to be fair, I haven't practiced with Bionic so it might get faster over time. But my initial reaction was honestly disgust.


More or less, but the creator is much more interested in patents and trademarks and selling API access at mind-boggling cost. It's obvious it's just snake oil.


It definitely feels faster to read bionic and almost, dare I say it, pleasurable


Note there is rapid eye movement you’re not conscious of


What about instead an AI project that rewrites content for fastest consumption and easiest comprehension?

Not a summary, but kind of like a lossless re-encoding for maximum human efficiency.

You could even allow custom reader profiles that take into account things like vocabulary and subject expertise.


So, the gist bot that some subreddits already use? It has a different name than gist which I cannot recall, but probably an ML algorithm.



I was specifically proposing not a summary, because I know summaries are already done and they have their purpose.

But for people who want to read faster in a lossless way this wouldn’t apply.


Not the same, it just summarizes.


Oh, true, that even works with German texts, so the TLDR bot probably has limited language comprehension and does not rephrase things.


Hopefully in the future, language models will be able to read books for us and summarize whether or not they’re worthwhile. Bumping from 200 wpm to 300 wpm doesn’t put a dent in the 100+ million books circulating in print.

Unrelated Readwise questions (love the work y'all do, thank you!):

- any idea when the Chrome app will support multiple highlights in one go?
- is there a random quote API that references all sources at once?
- has the team considered adding semantic search to the app?


We already have rating systems and reviews that are human-generated and human-curated to help you decide if a book is "worthwhile". How could an AI language model improve on that? In other words, why would you trust AI more than human curators/reviewers?


An AI could tailor it to the person, put it in context of other things already read, and essentially create a course of study for you. Humans can do the ~same, but it's pretty expensive.

An AI could also improve on the status quo by being more consistent, and there's _some_ possibility that it could be less biased than a human in several ways.


hey there! thanks for the kind words :)

on AI-based book summarization, here's a really interesting article by OpenAI on the topic: https://openai.com/blog/summarizing-books/

*any idea when the chrome app will support multiple highlights in one go?* we have a new browser extension (yet unreleased) that enables you to highlight the native page (i think that's what you're asking)

*is there a random quote API that references all sources at once?* at the individual user level, yes. readwise.io/api_deets

*has the team considered adding semantic search to the app?* not yet, but my cofounder tristan did push a huge full-text search update a few months ago that makes search results 10x better than they were before


Different people will have different opinions on which parts and aspects of a book are worthwhile or not. Books that were worthwhile to me ten years ago may not be worthwhile to me today, or in ten years, and vice versa. I don’t expect AI to be able to adequately address that anytime soon.


Sounds like a book review.


There are no shortcuts to anything in life. Thinking so is a fool's errand.

You get better at reading by reading and deliberately practicing it. Speed reading is a completely different skill than comprehension. Comprehension is a completely different skill than entertaining yourself. The fundamentals aren't going anywhere. They'll be here when you realize speedreading, bionic reading, and summaries are just distractions.


This is trivially disprovable; it's easy to think of longcuts, ways to accomplish or learn something inefficiently.

There would be no difference between a great teacher and a terrible teacher otherwise. Or a great coach and a terrible coach.


The entire software industry is based on creating new efficiencies... On top of all the stuff you said I just... don't see how anyone could believe that.


I've always thought research papers were a longcut to comprehension.

I don't know if this is because the subject matter is novel, or that the authors are better at math and science than writing.

I wonder if there could be a standard for grading a research paper on readability to raise the status quo.


Unfortunately, readability is at odds with other, more important goals of the authors, like shoe-horning in references to all important papers in the field, especially if they're likely to be written by reviewers.

Also, a lot of work must be shown to demonstrate completeness and rigorousness, and that doesn't help comprehension either.

The trick is to read only the segments of the paper that are actually worth reading, which is often only the abstract, conclusion and maybe implementation details.


I use text-to-speech… it absolutely is a shortcut, and without a doubt it improves my reading comprehension, stamina, and speed.

I can even look away from the screen and still follow along.

It has changed my life and introduced me to so much more information that I would have otherwise not attained with manual reading.

Eye scanning and mental vocalization are really tiring on my brain. Text to speech has solved both of these problems.

I think there’s something about adding a new dimension (auditory) that also helps with memorizing and comprehension. It adds more data points for my Bayesian brain to use and associate with.


It's the opposite for me. I can't listen to things at anything like my comfortable reading speed. The move of so much internet content from text to video has been a disaster for me.


Depends on your style of synthesis I think. I need to think and establish axioms in my head after every two paragraphs so I pause a lot. Going over something again on TTS puts me to sleep.


For me, listening to audiobooks was a game changer. I guess TTS is even better because you can listen to pretty much everything.

Any app/software you can recommend?


I use NaturalReader paid subscription with high quality TTS voices for web browsing and the free iOS version of Voice Dream reader for ebooks and mobile browsing.


My retention from listening to text is a tiny fraction of my retention from seeing written words and numbers.


I follow along with written words and listen at the same time. The apps I use automatically scroll the page and keep the current text in focus.


What apps are you using?


Pretty much all human inventions are short cuts.

Wheels, cars, computer etc.


The point of reading is to expand your mind, have new experiences, and have new thoughts. Shortcutting that is probably not what you want to do. In fact that applies also to computers and cars. James P. Carse:

"Moreover, machinery is veiling. It is a way of hiding our inaction from ourselves under what appear to be actions of great effectiveness. We persuade ourselves that, comfortably seated behind the wheels of our autos, shielded from every unpleasant change of weather, and raising or lowering our foot an inch or two, we have actually traveled somewhere.[...] Therefore, the importance of reducing time in travel: by arriving as quickly as possible we need not feel as though we had left at all, that neither space nor time can affect us—as though they belong to us, and not we to them. We do not go somewhere in a car, but arrive somewhere in a car. Automobiles do not make travel possible, but make it possible for us to move locations without traveling. Such movement is but a change of scenes. If effective, the machinery will see to it that we remain untouched by the elements, by other travelers, by those whose towns or lives we are traveling through. We can see without being seen, move without being touched."

The platonic ideal of speedreading: spend 1 minute on each book, read 10k books per year. The shallower the better; complications might negatively impact reading speed.


No surprise. It's been shown over and over that speed reading, no matter the gimmick, doesn't work. It is just another one of those self help fads that never goes away because it is super easy to re-package and sell. Just read without thinking about how fast you are reading and you'll do fine.


Have we isolated the writing style with these experiments?

I have no trouble believing that speed reading concise, efficient writers represents a loss of information. But some writers just will not get to the fucking point, and those were the books/classes where I was a little jealous of the speed readers. I need to read and retain about 10% of this garbage, but my brain is not cooperating on skimming.


Did you speed read this article? They weren't measuring whether speed reading itself works, rather whether bionic reading works for speed reading.


For me, it was a little surprising that there was no effect at all. For whatever reason it felt like it ought to have an impact.

However, if I think about it, reading has been done for thousands of years and by now something like that would have prevailed if it was effective.

Anyway, we need more experiments like this. I bet there are a ton of things out there that we think make our lives better, but in reality don't do a thing.


thanks rasul!

on an individual basis, there is some evidence that reading with your "best" font may have a meaningful impact on reading speed with no loss of comprehension. but you could probably unlock these gains by simply switching from, say, arial to garamond (i made that up for illustrative purposes), rather than implementing this complex font style.


I think what would be most interesting and helpful is an augmented form of reading that is semantic. I would love to have bold and italics and even section headings and table of contents that are toggleable on and off as I read anything, to quickly skim and focus on the most relevant content.

I can also imagine a version of this which is contextual, based off a query, or a personalized recommendation system.

I spend much more time skimming than reading, and skimming to determine if something is worth reading. Anything that can support that is incredibly valuable and would increase my functional reading speed for accomplishing tasks.


> there's no universal best font […] different fonts increase reading speed for different individuals

My takeaway is that system fonts, reader mode fonts, etc., should always be configurable.


If you want to read fast write a little tool to do "Rapid Serial Visual Presentation".

The basic idea is that you flash words on a screen in the same location so that your eyes don't have to pan back and forth, you just look straight ahead.

Your eyes and brain can recognize whole words as a gestalt (without reading each letter.)

That's it.

This method cuts out most of the physical overhead of reading.

When I played with it I got to the point where I could read over 500 words/minute. I could read faster than my internal voice could speak.
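A minimal terminal sketch of such a tool (plain Python, reading from stdin; a real RSVP reader would also align words on an optimal recognition point rather than just centering them):

    import sys, time

    def rsvp(text, wpm=450):
        delay = 60.0 / wpm
        words = text.split()
        width = max((len(w) for w in words), default=1)
        for word in words:
            # Overwrite the same spot so the eyes never have to move.
            sys.stdout.write("\r" + word.center(width))
            sys.stdout.flush()
            time.sleep(delay)
        sys.stdout.write("\n")

    if __name__ == "__main__":
        wpm = int(sys.argv[1]) if len(sys.argv) > 1 else 450
        rsvp(sys.stdin.read(), wpm)

Usage would be something like: python rsvp.py 500 < article.txt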


The whole thing was a hilariously transparent marketing strategy to begin with for their stupidly expensive (and tracks-everything-you-read) API.


I don't think so, and I have 0 interest in any of these companies -- I did find it notable they were checking a company's claim, on a company blog, but I also found it notable there was 0 mention of how this related to their company


Sorry, don't think I was clear. I don't mean this study, I mean Bionic Reading and their virality in the first place.


Thanks for posting this - I appreciate the thorough analysis, as well as complete transparency around methods and data.


thanks marc! appreciate the kindness.

ps- philly native here :)


I'm not surprised that this slows down most people. But I'd be curious about a more targeted audience of people with known reading problems: people with dyslexia, for example. Or even regardless of speed, does it help overall with people who have comprehension problems with written communication?


i got that feedback a lot after initially posting the experiment:

> Since posting this experiment, I've received a lot of side comments along the lines of, "Well, of course I don't expect Bionic Reading to work for most people, but for [my subpopulation], it really works." If that were the case, we might expect to see disproportionate benefits for those participants who read faster with Bionic Reading than for those who read faster without Bionic Reading. Let's look at how many participants read faster with each font and their average speed gains.

there might be some specialized effect on folks with known reading problems, but bionic reading makes no claims to specifically work for those people but rather the whole population so that's what we decided to test.


> Let's look at how many participants read faster with each font and their average speed gains.

“Average speed gains” doesn’t seem like the best metric here. A graph showing the distribution of users across different WPM differences would be much more informative. Sure, the average user doesn’t see outsized results, but are you saying that zero participants in your test read substantially faster with BR?


It would be interesting to ask people after the test:

Have you been diagnosed with, or do you believe you have, dyslexia, ADHD, visual impairment, or other reading challenges?

It would also be interesting to ask people if they felt it was easier to read with the new tech or the old way. Speed is one metric, but subjective impressions of reading ease are also relevant.


we were testing bionic reading’s claim. they make no claims to only working when reading challenges are present.


This fits with my experience so I believe it!

That said, I did feel like the bionic rendering made me more likely to read every word as opposed to skimming sentences. It also made reading feel more "percussive" which can be fun. Definitely an interesting line of inquiry even if it's over-hyped.


What I would really like, instead of methods that help me speed up my reading, is something that has a model of my knowledge and can summarize information down to the points I don't know yet that are relevant to me.

Does someone know if that exists or if there is research on it?


If your reading speed is the limiting reagent for your understanding of a piece of writing, you should read something else. It's like if the limiting factor in your programming speed is typing: the thing to fix is probably not your typing speed.


I'm not surprised at all.

Maybe I'm just older, have seen more, and have a better tuned BS meter.


This is just the 2022 version of Spritz(2014). Before that...

SQ4R - Survey/question/read/recite/reflect/review

Gummy bear trick - Place candy/gummy bears on the pages and eat them as you read them.

SparkNotes - Read someone else's summary of what you need to absorb.


What if the fixation points are not on every word, but instead create a narrower span by sitting a couple of inches in from either side of each line, using the classic speed-reading trick of relying on peripheral vision? Has it been tried?


Completely unsurprising. The bottleneck in reading is not visual recognition of words or eye movements. Source: decades of actual research on this topic.


We'll figure something out though. It was big news about ten years back in sports medicine that cooling off the lower arms was unreasonably effective for recovery, even, IIRC, better than a full body ice bath.

I don't know if they still do it, but at the time they were showing off these weird baggies you put over your arms like oversized mittens and ran coolant through.

We know sleeping helps with memory retention. We know that NSAIDs influence experiences of emotional pain. What other folk remedies and completely random things also work on memory? I forget, is caffeine now established wisdom or do they keep going back and forth on that?


Headline: Does Bionic Reading actually work? We timed over 2,000 readers and the results might surprise you

First line: tl;dr. Actually no, the results will probably not surprise you

fuck this


I find it incredibly hard to read, so I really hope it doesn't become a common thing. So I am kinda happy about these results.


Can you show the distribution of reading speeds? If this font has a multimodal effect, then the t-test mightn't be appropriate.


i made the dataset publicly available. all the distributions are approximately normal.


my reading bottleneck is understanding and contemplation, not eye motion. for that bionic reading seems useless


> tl;dr. Actually no, the results will probably not surprise you. After analyzing data from 2,074 testers, we found no evidence that Bionic Reading has any positive effect on reading speed.

I've seen a lot of viral social media posts about bionic reading recently and this is the very first time I've ever seen anyone mention reading speed. Everything I've seen is selling bionic reading for greater reading comprehension and focus. Never mentions speed.

Granted the article measures reading comprehension too (though I'd have some doubts about the methodology of 3 MCQs on a PG article - this part of the test didn't seem high on the author's priority list).

Did the authors just waste a lot of their time because they didn't pay attention to the claims or are we just in very different bubbles?


giant hero section on bionic-reading.com:

"Did you know that your brain reads faster than your eye?"

title of bionic-reading.com:

"Faster. Better. More focused. Reading."

does "faster" not refer to speed to you or something?


I'd heard of Renato but not the registered trademark (why don't they use the font throughout the website)?

I guess that answers my question: different bubbles


Meta comment: I'd appreciate if we could adopt the 'academia style' for headlines (ie tell the results right away) instead of the 'clickbait style' (click here to find out).

So in this example: "Bionic reading does not change reading speed: results from our experiment" instead of "Does Bionic reading work? ...".


I really hope Dang or one of the others in charge here is reading this and is willing to consider it.

The change suggested here would be one of the greatest single improvements to the signal-to-noise ratio on this site, and even news in general.

It would be a policy change rather than a tech change really, and would need some enforcement. Just a reminder on the submission page, and perhaps a banner at the top of the home page highlighting the policy change, and directing people to flag titles in the wrong form.

(Or, failing that, does anyone have any suggestions for good tech news sources where the titles are in this format? This is definitely something I want to see.)


It would also be easy to prompt you to consider rewording if your title has a question mark. There would be some false positives, but I'd think the vast majority would benefit from rewording.


Yes. We've changed the title to a representative sentence from the first paragraph.


I read once that a headline that ends in a question mark is almost always a no. I think the thought is that positive answers to a question are shared as a statement. Much like you shared. It saves me a few clicks?


You ended with a question mark, so I must assume it doesn't save you a few clicks.

I'll take my downvotes now.


Why didn't they test for comprehension of the text?

This is a huge flaw of this study.



Perhaps you did not comprehend the writeup, which describes this aspect of their analysis in some detail.


stopped reading at 'and the results might surprise you'

if you want 'hackers' to read something, best not start with the same line as every ultra-low-quality invasive clickbait ad


should have continued reading. they follow with "not so surprising".



