can we kill this entire fad of speed reading and productivity life hack nonsense already? I propose we start to advocate slow reading in the spirit of Peter Norvig's learn programming in 10 years, and tell people to actually enjoy what it is they're reading.
I disagree. There are things I want to learn and I really don't care for the fluff.
Ironically (or not?) self-help books are absolutely full of fluff, but at their core can have very useful/helpful ideas. Those ideas just get expounded upon and backed up with 10 specific examples, where each example starts at the beginning of the person's life in full detail.
Speed reading has helped me get through those books and learn something where I would have wasted so much time or actually never even picked the book up.
> Ironically (or not?) self-help books are absolutely full of fluff, but at their core can have very useful/helpful ideas. Those ideas just get expounded upon and backed up with 10 specific examples, where each example starts at the beginning of the person's life in full detail.
That's not fluff. It's an artifact of people being different and the goal of the book being to connect with someone and make an impact. A list of pithy principles would have no impact and make no connection. The exposition can do that (e.g. relatable story, example of an application that's close enough to the reader's circumstances), but not all of them will connect with every person.
Maybe "no impact" was overstating it a little bit, but I think the point still stands. They often have far less impact when they're not reinforced by the right story or example, but that story or example won't be the same for everyone.
How could an author who knows nothing about you possibly be better than yourself at construing an example applicable to you?
It then seems to me these kinds of books must be made for people who want others to think for them.
So maybe all these books should say is: "Think. Your problem is a lack of aforementioned activity. If you need some food for thought here's a list of ten pithy principles. Flip to page two for an afterword by my publisher."
Of course that wouldn't be good business sense. Why show someone the spring when you can sell them water?
> How could an author who knows nothing about you possibly be better than yourself at construing an example applicable to you?
He can't. The author can't predict which story or example will connect with you, so he includes many that connected with someone, with the hope that some fraction will connect with any given person. That's why people complain about "fluff": they're annoyed by the stuff that doesn't connect with them, but they haven't thought about it beyond their own personal experience. Maybe they could redact the book down to 10 pages, but so can everyone else, and all the redactions would be different.
> It then seems to me these kinds of books must be made for people who want others to think for them.
You're being uncharitable and kind of conceited. Would you say Calculus textbooks are for people who want others to think for them? Do Real Men take a short primer on mathematical logic and the axioms of ZFC set theory, and go derive Calculus for themselves?
> they're annoyed by the stuff that doesn't connect with them, but they haven't thought about it beyond their own personal experience.
Emphasis mine.
> Would you say Calculus textbooks are for people who want others to think for them? Do Real Men take a short primer on mathematical logic and the axioms of ZFC set theory, and go derive Calculus for themselves?
My response was argumentum ad absurdum, so I gain nothing from defending the position, but still:
Walter Rudin's Principles Of Mathematical Analysis would be a terrible book if it tried to relate the matter's purpose at every step. Nobody expects a mathematical textbook to do that.
In fact Principles Of Mathematical Analysis is a great book for being extremely concise and containing just what is necessary for a reasonably intelligent reader to understand the material.
To be more clear, I don't believe that your explanation could possibly be a good reason for these stories being included in a self-help book, though I don't preclude that they may serve another purpose.
> ...just what is necessary for a reasonably intelligent reader to understand the material.
What about readers who aren't "reasonably intelligent" (which often means "quite a bit more intelligent than the typical person")?
> To be more clear, I don't believe that your explanation could possibly be a good reason for these stories being included in a self-help book, though I don't preclude that they may serve another purpose.
What's the reason for your emphasis there? Are you reading the "self" part too literally or idiosyncratically? IIRC, the "self" just means the book is meant to help the reader with his problem without personal guidance from some professional. It doesn't mean the reader is supposed to figure it out on his own.
As I've gotten older, I've gotten more wary of certain biases that engineer-types often tend to indulge in. One of them is along the lines of "I'm so smart, I think I can figure it out on my own, therefore everything I think I don't need is unnecessary." Another is temptation to confirm one's intelligence by seeing the "real" reason as some cynical ploy that works on lesser people.
Also, I'm not saying every self-help book is good, or that it never happens that examples are truly just padding. It's just that there are good, non-cynical reasons not to reduce everything down to some pithy list of axioms. I know for a fact that at least one well-regarded book is structured that way, and it was a bit of a slog because of all the examples and stuff that didn't connect with my particular circumstance (but there's no way for someone I never met, writing a couple years before my birth, to tailor anything to me).
Agree. Even beyond enjoyment I’ve heard multiple very accomplished people in a variety of fields (from theology to math) say in interviews “I’m actually a very slow reader.”
So sure, reading fast would be nice, but I think that's mostly biological, whereas slow, quality reading is something you have full control over, and we should focus on that instead.
Ok, but… I love being able to speed read. Love it. And I can’t speed read many things, like philosophy or “zorba the Greek.” Lol just kidding I totally speed read that shit (but slow down when necessary)
“Some books are to be tasted, others to be swallowed, and some few to be chewed and digested; that is, some books are to be read only in parts; others to be read, but not curiously; and some few are to be read wholly, and with diligence and attention.”
Well there is some stuff you want to enjoy and there's other stuff you want to process as fast as possible so you get more time to read the stuff you enjoy :)
It's been increasingly pointed out to me that my fiction reading habits, which permit enjoyable re-re-re-reading (the constant surprise of things missed on each prior reading), are anti-patterns for legal or technical writing: in those cases reading deep and slow is far better.
There's nothing wrong with re-reading per se. It's about context.
> in the spirit of Peter Norvig's learn programming in 10 years, and tell people to actually enjoy what it is they're reading.
Not everything should be enjoyed though and that needs to be better known lest people feel they’re doing it wrong when reading skimmable material back to back.
One of the issues with tests like these is that the companies sponsor the research, and that they are one-off rather than long-exposure studies; as with most high-throughput interfaces, it takes time for the mind to adjust.
My guess is that, given human speech appears to have a universal transmission rate of around 39 bits per second, that on average is going to be the actual performance target:
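As a back-of-envelope sketch (my own arithmetic; the 39 bits/s figure is the one cited above, while the 150 WPM speaking rate and 2.8x playback multiplier are just illustrative assumptions):

```python
# Back-of-envelope: implied information density of speech, assuming the
# ~39 bits/second cross-language figure and a typical speaking rate.
BITS_PER_SECOND = 39        # reported universal speech information rate
wpm_speaking = 150          # assumed typical conversational speaking rate

words_per_second = wpm_speaking / 60
bits_per_word = BITS_PER_SECOND / words_per_second
print(f"{bits_per_word:.1f} bits per word")       # 15.6 bits per word

# At 2.8x playback (a rate some listeners report), intake is roughly:
print(f"{BITS_PER_SECOND * 2.8:.0f} bits/second")  # 109 bits/second
```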
I am definitely not blind, but I listen to audiobooks and podcasts at high speed all day at work. I average around 2.5x-2.8x speed (lower for certain narrators, and up to 3x+ if I'm not doing anything mentally taxing or anything that requires much of the language-processing part of my brain). I could drive or play a non-text/plot-heavy video game at the same time, but not talk or problem-solve.
While I agree that both the blind and others, including myself, listen to screen readers and audio at a higher rate, that's not research that can be reviewed and shared. Speaking for myself, I'm 100% sure the noise-to-signal ratio increases when I do, but if needed I just go back a few seconds and relisten to the prior audio. I never literally test my comprehension systematically at high speed relative to normal speed. The speakers, vocabulary, topic familiarity, etc. also make a huge difference; that is, prior familiarity with the input in general.
On the flip side, I provided research on transmission rates, which to me seems reasonable, but another user shared research on reception rates, which to me is unreasonable:
Not blind, but I often watch youtube videos from 2x - 4x speed, only going below 2x to review complicated information (pausing a tutorial to read the text/code/image on the screen) or if the timing of information is part of the information (music, comedic timing, etc...)
I think this is in jest, but to answer seriously: No, I think that's actually backwards. Trying to make a language more "packed" will harm information flow.
Evidence suggests the real limit is how quickly human brains can take ideas/qualia, convert them into abstractions, and encode the abstractions into language. This is because (A) very different languages still exhibit similar limits and (B) those limits appear to be governed by the sending-side. People can comprehend spoken words at a higher rate than they can spontaneously emit them.
So trying to make the language more "compact" would likely just waste precious brain-cycles on the compression step, which isn't actually necessary when your mouth already supports talking faster.
Technology analogy: Two computers are collaboratively solving a problem with back-and-forth messages. The network connection is actually very good, however the bottleneck is the CPU in each computer. Will the problem be solved faster if you change the transmission style from plaintext to gzipped?
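A minimal sketch of that analogy (hypothetical message text; `zlib` standing in for gzip): when the exchange is CPU-bound, compression just adds work on both ends, and short messages barely shrink anyway.

```python
import zlib

# A short "collaboration" message between the two computers
msg = b"partial result: x=3.14; your turn to refine step 4"

compressed = zlib.compress(msg)
print(len(msg), "->", len(compressed), "bytes")  # short payloads barely shrink

# Both ends now spend extra CPU cycles -- the scarce resource in this
# scenario -- on compress/decompress, and the content is unchanged:
assert zlib.decompress(compressed) == msg
```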
The other problem with this idea is that natural languages contain quite a bit of redundancy, which is why speech and writing can be decoded even over very noisy channels. It’s hard to make language more compact without removing some redundancy.
A related rant I've had over the years: the redundancy present in human language is not a flaw to be optimized away. Evolution went there because it acts as a form of forward error correction.
> People can comprehend spoken words at a higher rate than they can spontaneously emit them.
Like so many things with people, it depends on what they practice. Most people practice listening far more than rapid speech. Some people can speak faster than most people can comprehend. Part of learning to do that may involve not listening (to themselves) in the same way.
> So trying to make the language more "compact" would likely just waste precious brain-cycles on the compression step
If you learned this new more compact language as your native language or to native fluency, I don't imagine you would need to go through a compression step, since all of your thoughts would already have happened in the compacted form.
That raises the question of why it seemingly hasn't happened already.
One option is that what you're describing just isn't possible, that humans are already butting up against some kind of limit which is not avoidable simply by being raised with a different language.
Another option is that one can be raised to think in "pre-compressed" structures, but nobody does because it's a bad tradeoff, dropping "general thinking" performance with a worse impact than any "faster speaking" benefits. (Such as being simply slower, or more error-prone, or more demanding on attentional resources, etc.)
A Large Inclusive Study of Human Listening Rates [1] focused on finding optimal generated-speech rates for screen-reader software:
> The mean Listening Rate was 56.8, which corresponds to 309 WPM. Given that people typically speak at a rate of 120-180 WPM, these results suggest that many people, if not most, can understand speech signifcantly [sic] faster than today’s conversational agents with typical human speaking rates.
While there was some difference between the visually-impaired and normal-vision respondents, I don't think it's enough to matter to the thesis of my post:
> [...] The mean Listening Rate for visually impaired participants was 60.6 (334 WPM) while for sighted participants it was 55.1 (297 WPM).
Thanks. I reviewed the research, but it's unclear to me how a synthetic voice reading off a single word per unit of measure is an accurate gauge of average listening rates.
From page 4 of 12 of the PDF you linked to, “Rhyme test: measures word recognition by playing a single recorded word, and asking the participant to identify it from a list of six rhyming options (e.g., went, sent, bent, dent, tent, rent). We used 50 sets of rhyming words (300 words total), taken from the Modifed Rhyme Test [27], a standard test used to evaluate auditory comprehension.” The word list research used is on page 30-31 of 55 of this PDF:
If true, the test isn't even measuring word recognition; it's more accurately measuring single-phoneme recognition. If the listener picks the correct phoneme from a multiple-choice list, the researchers assume the person would hear and understand 100% of any expression received at that rate of speed, which in my opinion is clearly flawed.
I would be the first to agree that testing listening comprehension rates is hard to do, which is why I asked to review the research, but unless I am misunderstanding something, it's unclear how this research provides any meaningful observations.
I know you're probably joking, but for those out of the loop, the result your parent was referring to is that more verbose languages have faster speakers and the two effects compensate one another pretty well.
No, latin is the only language we'll ever need! It is much closer to the fundamental truth, unlike these modern languages that make unnecessary abstractions of what is really going on at the low level.
The article’s conclusion doesn’t seem to match the data they collected. They found that most people (52%) do read faster with Bionic Reading; the effect is generally quite small, but large for some minority of users - at least one user read 293 WPM faster with Bionic Reading!
The authors then average results (good and bad) across all users, resulting in a number close to zero, and conclude Bionic Reading doesn’t work for anyone, even calling it a placebo effect at the end.
The problem is, all brains are not the same. It doesn’t have to work equally well for everyone to be valuable.
People collect all sorts of fascinating data but then analyze it so superficially, throwing very low-hanging fruit away without a thought. The biggest loss is information about individual variation.
Why do we only crunch data down to averages? Differences in group variation can be measured just as rigorously as differences in group averages. Repeatedly testing an unusual individual is perfectly legitimate and scientifically interesting. There is no Law of Science that says you have to execute a boring protocol that rigidly assumes everyone is the same.
There's a classic story about how the Air Force learned that no one is average, and switched from fixed "average" cockpit fittings to adjustable ones, greatly reducing accidents. But it seems like no one has learned from this.
For heaven's sake, let's try to learn about individuals and what works for them!
The results are what you would expect if you measured two identical fonts, i.e. random.
You can make the _claim_ in your comment, but there isn't statistical evidence that this effect is more than random noise, or that if the same people were tested again their results wouldn't be the opposite. Put another way, the results fail to reject the null hypothesis.
I think the best approach would be a graph showing the distribution of users across different WPM differences, and if possible a second experiment with two non-BR fonts (to see how much random variation is expected). The positive results might all be noise, but I don’t think that should be the default assumption.
FWIW, the article does include some data on the distribution, as well as address your exact objection:
> Since posting this experiment, I've received a lot of side comments along the lines of, "Well, of course I don't expect Bionic Reading to work for most people, but for [my subpopulation], it really works." If that were the case, we might expect to see disproportionate benefits for those participants who read faster with Bionic Reading than for those who read faster without Bionic Reading. Let's look at how many participants read faster with each font and their average speed gains.
Table 3: Summary speed differences per faster font

    Font         Count   Percent   Delta (WPM)
    Bionic         998       52%            35
    Non-Bionic     918       48%            43
> The number of people who read faster with Bionic Reading was slightly greater (52%) than the number of people who read faster without Bionic Reading (48%). That said, those who read faster with Bionic Reading only picked up 35 words per minute on average. In contrast, those who read faster without Bionic Reading picked up 43 words per minute. It does not appear that when Bionic Reading works, it really works.
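For what it's worth, a quick sign-test sketch on the table's counts (my own calculation using a normal approximation, not from the article) suggests the 52/48 split is not clearly distinguishable from a coin flip:

```python
import math

bionic_faster, nonbionic_faster = 998, 918  # counts from Table 3
n = bionic_faster + nonbionic_faster

# Under the null hypothesis (the font makes no difference), each reader
# is a fair coin flip, so the count is Binomial(n, 0.5); use the normal
# approximation with mean n/2 and standard deviation sqrt(n/4).
z = (bionic_faster - 0.5 * n) / math.sqrt(n * 0.25)
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 1.83, p ≈ 0.07 -- borderline at best
```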
Yes, but they’re still averaging the results in the “it worked” category. Hypothetically, if it worked very well for 5 users and only a little for 95 users, an average score over all 100 will make it appear to not work for anyone. I’m not going to argue this is definitely the case - just that this analysis doesn’t account for the possibility, and it wouldn’t be unexpected in the area of reading ability, where one would expect individual differences to be important.
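That hypothetical is easy to see in numbers (made-up per-user gains, purely to illustrate how a mean can bury a responsive subgroup):

```python
import statistics

# Hypothetical per-user WPM gains: 5 strong responders, 95 near-zero
gains = [60] * 5 + [2] * 95

print(statistics.mean(gains))  # 4.9 -- looks like "barely works" overall
print(max(gains))              # 60 -- yet a subgroup benefits substantially
```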
> the effect is generally quite small, but large for some minority of users - at least one user read 293 WPM faster with Bionic Reading!
Well, that's not really something you'd want to admit in your paper, much less advance as an argument. The obvious conclusion is that there's a mistake in your data, not that some users will see an increase in reading speed of 4.9 words per second.
Reminds me of The End of Average, [1] a book about human variation. It talks about how the Air Force tried to design a cockpit by averaging various measurements, only to find that their pilots weren't actually average.
Your point is especially apt here, where there is a possibility that the technology could be helpful for a subpopulation, and act somewhat like an assistive technology. It is very tricky to assess the utility of assistive technologies (I work in this field and have asked many experts how they do it) because the average impact isn't the most important thing. It doesn't matter if high-contrast mode is bad for 90% of people, if for the 10% who would actually use it, it's helpful.
One way that the study could have been optimized is if participants had been asked (after taking the test, to avoid priming) if they have any specific reading challenges. That would have helped identify whether there appear to be any subgroups that disproportionately benefit from the approach.
I think the idea here isn't that we're treating all brains the same, but that the deviations represent statistical noise/outliers rather than an actual effect from the bionic intervention.
I agree though, it would have been an interesting follow-up if they asked only people who felt they benefit from it, and then conducted their analysis again on that sample!
> we intend to run some more tests in the hopes of discovering a screen reading technique that yields material benefits. We're aware of a couple other technologies that seem interesting including BeeLine Reader, Spritz, and Sans Forgetica.
BeeLine founder here — feel free to reach out prior to your testing (nick@[domain]). We can share info on past testing and implementation techniques.
We've wondered about Sans Forgetica in the past, which may increase reading comprehension (at the expense of speed). It might be interesting to try a BeeLine/Forgetica test, to try to get the best of both worlds!
Coincidentally, I was wondering a similar thing this morning. To get as much text as possible on paper for holiday reading, I managed to shrink the text by 30% by stripping out every vowel but the first in each word, replacing 'the' with '#', 'and' with '&', and 'of' with '%'. I also used the smallest Tahoma font and multiple columns.
This increases reading complexity just as Sans Forgetica does, but perhaps also increases reading speed somewhat, because words get stripped down to their 'essence' (perhaps similar to Chinese or Japanese script) and your eyes move less with multiple columns.
In that 'word essence' sense, this technique has similarities to Bionic Reading. Combining the two would be tricky: which letters would be bolded with the devowel method, and can you get closer to a word essence?
A combination of BeeLine with Bionic Reading or devoweling could have more potential. It's nice that BeeLine has grayscale support.
sed -e 's/ and / \& /g' -e 's/ of / % /g' -e 's/ is / = /g' -e 's/ the / # /g' -e 's/\B[aeiouAEIOU]//g' -e 's/\(.\)\1/\1/g' -e "s/'//g" sacred-world.txt > sw.txt
Thanks for the info — I’d been hopeful about Forgetica based on some general research about disfluent fonts and memory. But I guess things don’t look so rosy for this particular one, at least based on this one study. Thanks for sharing!
It is a pretty standard statistical test, taught in any bachelor-level class covering statistics (or earlier; in Germany we did this in 11th grade). This setup also includes blocking (https://en.wikipedia.org/wiki/Blocking_(statistics)), another pretty standard method you learn just by looking at how others do experiments.
I've said this via Twitter and I'll say it now: I'd expect there to be huge learning effect sizes for anything that alters the way we parse information, even in such "minor" ways as BR.
I'd like to see what the difference would be like after a day, four, and a week of reading articles with BR, if that's viable to enlist people for at all.
So Bionic reading is just a form of speed reading?
And it turns out that comprehension suffers when trying to read faster... I think that most people, when they read, already do a sort of automatic "speed reading" in their mind.
What I mean is: I notice that when I read, my eyes will skip parts of words where I don't need to see all the letters to know what word it is. What if all this trickery simply hinders our innate and built-in "automatic speed reading" capability?
My best interpretation of the test results is that Bionic Reading is just an exotic text styling that has no impact -- positive or negative -- on reading speed or reading comprehension.
If it were a form of speed reading, we would have observed faster speeds when using Bionic Reading (and probably lower comprehension as a trade-off). This is NOT what we observed.
I thought so too, but I've noticed in some "pair-programming-like" experiences that some people waste so much time reading posts and SO comments/threads that are obvious dead ends, and take too long to jump to the relevant section in logs or docs... So I think most people don't do speed reading at all
Having just tried it, I definitely felt like the bionic reading was completely unreadable. I am very much a skimmer, and to be fair, I haven't practiced with Bionic so it might get faster over time. But my initial reaction was honestly disgust.
More or less, but the creator is much more interested in patents and trademarks and selling API access for mind-boggling cost. It's obvious it's just snake oil.
Hopefully in the future, language models will be able to read books for us and summarize whether or not they’re worthwhile. Bumping from 200 wpm to 300 wpm doesn’t put a dent in the 100+ million books circulating in print.
Unrelated Readwise questions (love the work y’all do, thank you!)
- any idea when the chrome app will support multiple highlights in one go?
- is there a random quote API that references all sources at once?
- has the team considered adding semantic search to the app?
We already have rating systems and reviews that are human-generated and human-curated to help you decide if a book is "worthwhile". How could an AI language model improve on that? In other words, why would you trust AI more than human curators/reviewers?
An AI could tailor it to the person, put it in context of other things already read, and essentially create a course of study for you. Humans can do the ~same, but it's pretty expensive.
An AI could also improve on the status quo by being more consistent, and there's _some_ possibility that it could be less biased than a human in several ways.
*any idea when the chrome app will support multiple highlights in one go?*
we have a new browser extension (yet unreleased) that enables you to highlight the native page (i think that's what you're asking)
*is there a random quote API that references all sources at once?*
at the individual user level, yes. readwise.io/api_deets
*has the team considered adding semantic search to the app?*
not yet, but my cofounder tristan did push a huge full-text search update a few months ago that makes search results 10x better than they were before
Different people will have different opinions on which parts and aspects of a book are worthwhile or not. Books that were worthwhile to me ten years ago may not be worthwhile to me today, or in ten years, and vice versa. I don’t expect AI to be able to adequately address that anytime soon.
There are no shortcuts to anything in life. Thinking so is a fool's errand.
You get better at reading by reading and deliberately practicing it. Speed reading is a completely different skill than comprehension. Comprehension is a completely different skill than entertaining yourself. The fundamentals aren't going anywhere. They'll be here when you realize speedreading, bionic reading, and summaries are just distractions.
The entire software industry is based on creating new efficiencies... On top of all the stuff you said I just... don't see how anyone could believe that.
Unfortunately, readability is at odds with other, more important goals of the authors, like shoe-horning in references to all important papers in the field, especially if they're likely to be written by reviewers.
Also, a lot of work must be shown to demonstrate completeness and rigorousness, and that doesn't help comprehension either.
The trick is to read only the segments of the paper that are actually worth reading, which is often only the abstract, conclusion and maybe implementation details.
I use text-to-speech... it absolutely is a shortcut, and without a doubt it improves my reading comprehension, stamina, and speed.
I can even look away from the screen and still follow along.
It has changed my life and introduced me to so much more information that I would have otherwise not attained with manual reading.
Eye scanning and mental vocalization are really tiring on my brain. Text to speech has solved both of these problems.
I think there’s something about adding a new dimension (auditory) that also helps with memorizing and comprehension. It adds more data points for my Bayesian brain to use and associate with.
It's the opposite for me. I can't listen to things at anything like my comfortable reading speed. The move of so much internet content from text to video has been a disaster for me.
Depends on your style of synthesis I think. I need to think and establish axioms in my head after every two paragraphs so I pause a lot. Going over something again on TTS puts me to sleep.
I use NaturalReader paid subscription with high quality TTS voices for web browsing and the free iOS version of Voice Dream reader for ebooks and mobile browsing.
the point of reading is to expand your mind, have new experiences, and have new thoughts. Shortcutting that is probably not what you want to do. In fact that applies also to computers and cars. James P. Carse:
"Moreover, machinery is veiling. It is a way of hiding our inaction from ourselves under what appear to be actions of great effectiveness. We persuade ourselves that, comfortably seated behind the wheels of our autos, shielded from every unpleasant change of weather, and raising or lowering our foot an inch or two, we have actually traveled somewhere.[...] Therefore, the importance of reducing time in travel: by arriving as quickly as possible we need not feel as though we had left at all, that neither space nor time can affect us—as though they belong to us, and not we to them. We do not go somewhere in a car, but arrive somewhere in a car. Automobiles do not make travel possible, but make it possible for us to move locations without traveling. Such movement is but a change of scenes. If effective, the machinery will see to it that we remain untouched by the elements, by other travelers, by those whose towns or lives we are traveling through. We can see without being seen, move without being touched."
the platonic ideal of speedreading: spend 1 minute on each book, read 10k books per year. The shallower the better, complications might negatively impact reading speed.
No surprise. It's been shown over and over that speed reading, no matter the gimmick, doesn't work. It is just another one of those self help fads that never goes away because it is super easy to re-package and sell. Just read without thinking about how fast you are reading and you'll do fine.
Have we isolated the writing style with these experiments?
I have no trouble believing that speed reading concise, efficient writers represents a loss of information. But some writers just will not get to the fucking point, and those were the books/classes where I was a little jealous of the speed readers. I need to read and retain about 10% of this garbage, but my brain is not cooperating on skimming.
For me, it was a little surprising that there was no effect at all. For whatever reason it felt like it ought to have an impact.
However, if I think about it, reading has been done for thousands of years and by now something like that would have prevailed if it was effective.
Anyway, we need more experiments like this. I bet there are a ton of things out there that we think make our lives better, but in reality don't do a thing.
on an individual basis, there is some evidence that reading with your "best" font may have a meaningful impact on reading speed with no loss of comprehension. but you could probably unlock these gains by simply switching from, say, arial to garamond (i made that up for illustrative purposes), rather than implementing this complex font style.
I think what would be most interesting and helpful is an augmented form of reading that is semantic. I would love to have bold and italics and even section headings and table of contents that are toggleable on and off as I read anything, to quickly skim and focus on the most relevant content.
I can also imagine a version of this which is contextual, based off a query, or a personalized recommendation system.
I spend much more time skimming than reading, and skimming to determine if something is worth reading. Anything that can support that is incredibly valuable and would increase my functional reading speed for accomplishing tasks.
If you want to read fast write a little tool to do "Rapid Serial Visual Presentation".
The basic idea is that you flash words on a screen in the same location so that your eyes don't have to pan back and forth, you just look straight ahead.
Your eyes and brain can recognize whole words as a gestalt (without reading each letter.)
That's it.
This method cuts out most of the physical overhead of reading.
When I played with it I got to the point where I could read over 500 words/minute. I could read faster than my internal voice could speak.
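A minimal terminal sketch of such a tool in Python (the function name, padding width, and default rate are my own arbitrary choices, not any standard RSVP implementation):

```python
import sys
import time

def rsvp(text, wpm=500):
    """Flash each word of `text` at a fixed position so the eyes never pan."""
    delay = 60.0 / wpm  # seconds per word at the target rate
    for word in text.split():
        # carriage return moves the cursor back to the line start;
        # padding overwrites leftovers from a longer previous word
        sys.stdout.write("\r" + word.ljust(20))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

if __name__ == "__main__":
    rsvp("Your eyes and brain can recognize whole words as a gestalt", wpm=500)
```

Because every word lands in the same spot, the only motion is the text itself, which is the whole trick.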
I don't think so, and I have zero interest in any of these companies. I did find it notable that they were checking another company's claim on their own company blog, yet made no mention of how it related to their own product.
I'm not surprised that this slows down most people. But I'd be curious about a more targeted audience of people with known reading problems: people with dyslexia, for example. Or even regardless of speed, does it help overall with people who have comprehension problems with written communication?
i got that feedback a lot after initially posting the experiment:
> Since posting this experiment, I've received a lot of side comments along the lines of, "Well, of course I don't expect Bionic Reading to work for most people, but for [my subpopulation], it really works." If that were the case, we might expect to see disproportionate benefits for those participants who read faster with Bionic Reading than for those who read faster without Bionic Reading. Let's look at how many participants read faster with each font and their average speed gains.
there might be some specialized effect for folks with known reading problems, but bionic reading doesn't claim to work specifically for those people; it claims to work for the whole population, so that's what we decided to test.
> Let's look at how many participants read faster with each font and their average speed gains.
“Average speed gains” doesn’t seem like the best metric here. A graph showing the distribution of users across different WPM differences would be much more informative. Sure, the average user doesn’t see outsized results, but are you saying that zero participants in your test read substantially faster with BR?
It would be interesting to ask people after the test:
Have you been diagnosed with, or do you believe you have, dyslexia, ADHD, visual impairment, or other reading challenges?
It would also be interesting to ask people if they felt it was easier to read with the new tech or the old way. Speed is one metric, but subjective impressions of reading ease are also relevant.
That said, I did feel like the bionic rendering made me more likely to read every word as opposed to skimming sentences. It also made reading feel more "percussive" which can be fun. Definitely an interesting line of inquiry even if it's over-hyped.
What I would really like to instead of methods that help me speed up my reading is something that has a model of my knowledge and can summarize information to the points I don't know yet and are relevant to me.
Does someone know if that exists or if there is research on it?
If your reading speed is the limiting reagent for your understanding a piece of writing, you should read something else. It's like if the limiting factor in your programming speed is typing. The thing to fix is probably not your typing speed.
What if the fixation points are not on every word but create a narrower space by being a couple inches in from either side of each line to use the classic speedreading trick of using peripheral vision? Has it been tried?
Completely unsurprising. The bottleneck in reading is not visual recognition of words or eye movements. Source: decades of actual research on this topic.
We'll figure something out though. It was big news about ten years back in sports medicine that cooling off the lower arms was unreasonably effective for recovery, even, IIRC, better than a full body ice bath.
I don't know if they still do it, but at the time they were showing off these weird baggies you put over your arms like oversized mittens and ran coolant through.
We know sleeping helps with memory retention. We know that NSAIDs influence experiences of emotional pain. What other folk remedies and completely random things also work on memory? I forget, is caffeine now established wisdom or do they keep going back and forth on that?
> tl;dr. Actually no, the results will probably not surprise you. After analyzing data from 2,074 testers, we found no evidence that Bionic Reading has any positive effect on reading speed.
I've seen a lot of viral social media posts about bionic reading recently and this is the very first time I've ever seen anyone mention reading speed. Everything I've seen is selling bionic reading for greater reading comprehension and focus. Never mentions speed.
Granted the article measures reading comprehension too (though I'd have some doubts about the methodology of 3 MCQs on a PG article - this part of the test didn't seem high on the author's priority list).
Did the authors just waste a lot of their time because they didn't pay attention to the claims or are we just in very different bubbles?
Meta comment: I'd appreciate if we could adopt the 'academia style' for headlines (ie tell the results right away) instead of the 'clickbait style' (click here to find out).
So in this example: "Bionic reading does not change reading speed: results from our experiment" instead of "Does Bionic reading work? ...".
I really hope Dang or one of the others in charge here is reading this and is willing to consider it.
The change suggested here would be one of the greatest single improvements to the signal-to-noise ratio on this site, and even news in general.
It would be a policy change rather than a tech change really, and would need some enforcement. Just a reminder on the submission page, and perhaps a banner at the top of the home page highlighting the policy change, and directing people to flag titles in the wrong form.
(Or, failing that, does anyone have any suggestions for good tech news sources where the titles are in this format, this is definitely something I want to see)
It would also be easy to prompt you to consider rewording if your title has a question mark. There would be some false positives, but I'd think the vast majority would benefit from rewording.
I read once that a headline that ends in a question mark is almost always a no. I think the thought is that positive answers to a question are shared as a statement. Much like you shared. It saves me a few clicks?