How Might We Learn? (andymatuschak.org)
225 points by ColinWright 23 days ago | 79 comments



It is frustrating to read through demos which feature a hypothetical AI that greatly exceeds the capacities of any actual LLM, and which do not consider the serious risks of learners being misled by confabulations.

It is especially frustrating because I recently tested GPT-4o on a factual question and got 1,000 words that were all completely wrong, including fake citations.

It is especially frustrating to read this sci-fi daydreaming after talking to a high school science teacher who was forced to use generative AI tutors in their classes this year, even though these tutors are poorly tested and seem to have a ~20% confabulation rate. This particular teacher is technically sophisticated but even they sometimes get confused and misled by the chatbot. Students don't have a chance.

I think Matuschak has valuable insights on learning in general. But it seems incomplete to go through this AI thought experiment without discussing how inadequate current AI is to the task. "Technology will get better" but what if it takes 50 years?


It's getting worse. Noticeably bad.

I have started double-checking its answers with web searches more and more. What I can trust without checking went from 100% of the time to only 20%.


You have only tested the current "FREE" AIs, right? How do GPT-4 or Perplexity Pro work on the very same tests?


No, I tested the paid GPT-4 last year on similar questions (animal cognition) and it was so bad I decided it was a waste of money. I actually don't care if it's maybe gotten better in the past year, and I'm certainly not spending money to find out. Last I checked the best LLMs still have a 5-15% confabulation rate on simple document summarization. In 2023 GPT-4 had a ~75% confabulation rate on animal cognition questions, but even 5% is not reliable enough for me to want to use it.

The high school AI tutor probably wasn't using GPT-4, but the district definitely paid a lot of money for the software.

I also hate this entire argument, that AI confabulations don't matter for free products. Unreliable software like GPT-4o shouldn't be widely released to the public as a cool new tech product, and certainly not handed out for free.


CPaS (Cognitive Pollution as a Service).


Humans have been doing that for years. The AI problem is so prevalent because it seems to put a magnifying lens up to the worst portions of ourselves, namely how we process information and deceive each other. As it turns out, liars and cheats tend to build more liars and cheats, also known as "garbage in, garbage out," which leaves me scratching my head as to what anyone thought was going to happen as LLMs got more powerful. Seems like many are afraid to have that conversation, though.

I like your term for it.


It's always the same response on this website, isn't it? No, GP specifically mentioned GPT-4.


I have tried some chemistry problems on the latest models and they still get simple math wrong (mess up the conversion between micrograms and milligrams, for example) unless you tell them to think carefully.
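For the record, the conversion they flub is a single factor of 1000; a throwaway sanity check in Python, just to illustrate:

  # SI prefixes step by powers of 10^3: 1 milligram (mg) = 1000 micrograms (ug)
  def mg_to_ug(mg: float) -> float:
      return mg * 1000.0

  def ug_to_mg(ug: float) -> float:
      return ug / 1000.0

  assert mg_to_ug(2.5) == 2500.0  # 2.5 mg = 2500 ug
  assert ug_to_mg(750.0) == 0.75  # 750 ug = 0.75 mg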


I really enjoy reading Andy's ideas on education--alongside Peter Gray (a psychologist who emphasizes the importance of play for education, https://www.amazon.com/Free-Learn-Unleashing-Instinct-Self-R...) and Piotr Wozniak (invented SuperMemo, https://supermemo.guru/wiki/Main_Page), he has really shaped my perspective on learning. I actually built a last-minute YOLO application to YC on an extremely similar idea--I figure that modern LLMs are capable enough to offload most of the metacognitive aspects of learning onto. Learn drive (a Wozniak term: your natural curiosity) can take you pretty far in a subject, but it's often frustrating to find the right order to learn concepts based on your current understanding and the subject matter. I've previously scoured syllabi on the internet for this, but often what I want to learn isn't really codified in a single course.

I started building a prototype of this idea, which I've been very slowly working on in my free time: it indexes my notes in emacs and uses them for RAG against a locally running LLM. I do think these kinds of learning LLMs have to be run locally, though I've recently gotten a little frustrated because I cannot run a capable open model without my machine's fans turning on.
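In case it's useful, here's roughly the shape of the loop, as a minimal Python sketch. The notes path, the naive chunking, and the model names are all placeholders, and it assumes sentence-transformers for embeddings plus an Ollama server on its default port for generation:

  from pathlib import Path
  import numpy as np
  import requests
  from sentence_transformers import SentenceTransformer

  embedder = SentenceTransformer("all-MiniLM-L6-v2")

  # Index: naive chunking, one chunk per paragraph of each org file.
  chunks = []
  for path in Path("~/org").expanduser().glob("**/*.org"):
      chunks += [p for p in path.read_text().split("\n\n") if p.strip()]
  vectors = embedder.encode(chunks, normalize_embeddings=True)

  def ask(question: str, k: int = 4) -> str:
      q = embedder.encode([question], normalize_embeddings=True)[0]
      top = np.argsort(vectors @ q)[-k:]  # cosine similarity (vectors normalized)
      context = "\n---\n".join(chunks[i] for i in top)
      prompt = f"Answer using only these notes:\n{context}\n\nQ: {question}"
      r = requests.post("http://localhost:11434/api/generate",
                        json={"model": "llama3", "prompt": prompt,
                              "stream": False})
      return r.json()["response"]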


If you ever plan on writing about what you're working on, I'd be interested. I have notes in Markdown or Org form, as well as PDFs I can convert to text easily. I'd like to be able to run a small model locally and tune it with the content I have in the future.


In my experience, some key aspects of learning are honest self-assessment (avoiding unnecessary comparisons to others) and learning to appreciate whatever you have wherever you are.

Learning music is one of the best areas to learn how to learn. When you start a new instrument or technique, you will not be good relative to experts. That's ok. You just need to focus on what you can do and build from a solid foundation. Listen closely and your weaknesses will become apparent. You can even learn to appreciate your weaknesses as they provide the opportunities for growth and development.

Unfortunately, I worry that AI tools will ultimately hinder learning as much as help it (at least in the aggregate). My fear is that it will prevent people from exploring and finding their own path, passively following the track laid out by the AI. Inevitably many will compare themselves to AI and find themselves wanting, instead of asking what they can do that an AI can't.

Teachers can be helpful, but you ultimately are responsible for your own education. It is one thing to follow an individual who has a proven track record, but at least at this stage, I feel like AI tools may be more pied piper than wise sage.


"My fear is that it will prevent people from exploring and finding their own path, passively following the track laid out by the ai"

My experience learning with LLM-based tools over the past ~18 months has been the opposite. The ability to redirect the learning path in a direction that best matches my own interests and curiosity is unparalleled. It's that ability to directly influence the learning path that makes me so optimistic about LLMs as a tool for learning.


That's been my experience as well. Yes, AI will give me the knowledge, but I still drive the conversation. If I want to dive deeper into a certain topic, or even change tracks completely, I can.


> what were the most rewarding high-growth periods of your life?

Every single one was an extremely life-threatening moment in which I very likely would (and should) have died, but for my very rapid learning. The growth came in not being a corpse.

I realize I am not the target audience of this article.


"Target audience" or not, there's a ton of truth in what you say.

Necessity and lack of control in a novel and frightening situation transforms our minds. It isn't sustainable, but it's not meant to be, because the goal is to get through it and get out, by becoming a different, better person.

To pick some slightly less dramatic examples than combat or wilderness survival:

  A person who learned a new language in 14 days because they fell in
  love with someone who spoke almost no English, but simply had to be
  with them and make things work.

  Someone who became an expert bricklayer when stuck in a remote
  village where that was the only skill they could contribute.
A fella named John Taylor Gatto [0,1] became the New York State Teacher of the Year (winning it more than once IIRC) before being fired for reckless unconventionality. He once drove a bus of school kids upstate into the wild, gave each $10 and a bottle of water, told them their assignment was to "find your way home", and drove off. Of course all the kids made it and recounted the "best ever learning experience of their lives". Today they'd sue for trauma... if they survived.

The article I just read sadly describes more scaffolding, more mollycoddling, more "learning on rails", but "Now with added AI!"

[0] https://en.wikipedia.org/wiki/John_Taylor_Gatto

[1] https://thesunmagazine.org/issues/186/a-few-lessons-they-won...


I really enjoyed Gatto's "The Underground History of American Education," it's a refreshing and entertaining rebuke of the authoritarian/scientific management consensus on schooling, which has not changed much over the past century and is not equipped to educate children for the modern era.


> Necessity and lack of control in a novel and frightening situation

So, life.

> It isn't sustainable, but it's not meant to be, because the goal is to get through it and get out, by becoming a different, better person.

So, again, life.


Well yeah, minus the goal being to get out (which will take care of itself... eventually) :)


Just because we all get to the goal doesn’t mean it’s not the goal… if the people are merely players, then the last part of the play is to exit off stage.


For me, school was mostly frustrating.

I loved kindergarten and first and second grades which mostly seemed to be play. I think it was effective for me as far as creativity and socialization goes.

From third grade through high school I was bored with most of the material. A lot of it was not interesting and sometimes the pace was too slow.

At university, the work load was too high. I think if I had taken six years to complete the 4 year program, I would have been a lot better off. Too often I didn’t have the time to really dig into the material and explore related ideas (side quests). Instead I settled for memorization which was enough to do well on exams. My GPA at graduation did not reflect my command of the material.

Like Andy, I think an AI-powered course of learning could be great. The strength, I think, would be its adaptability. If while learning topic A I stumble across an interesting idea, it would have no problem with changing course and running down topic B.


I resonated with this urge to dig into the material a lot.

For me, two use cases of "digging" come up a lot: 1. I want to know how the concept I'm learning can connect with other concepts that I'm interested in (i.e., related concepts); 2. I want to know what other materials are available out there that can provide different perspectives (i.e., related resources). So I ended up building a map visualizing concepts (https://afaik.io/) where the proximity indicates relatedness and under each concept there are various resources attached.

In addition to those more "objective" connections, I think what AI could really help is to find a more "subjective" connection that's very user-specific and utilize those connections to build a personalized tutoring experience, hence the adaptability. For now, I think the barrier to realizing that level of adaptiveness is the high hallucination rate.


> At university, the work load was too high. I think if I had taken six years to complete the 4 year program, I would have been a lot better off. Too often I didn’t have the time to really dig into the material and explore related ideas (side quests). Instead I settled for memorization which was enough to do well on exams. My GPA at graduation did not reflect my command of the material.

This perfectly describes my experience too. My learning style is to 'ride the wave' of curiosity, where I am obsessed with something and keep digging into it. Uni learning was antithetical to this style, with its strict schedules and tests. I got mostly As and some Bs by doing what you did, but I didn't learn much of anything.

My son is somewhat like me, and I feel kind of sad to see him at a university just to earn a living. I feel disappointed that I didn't provide him enough financial freedom to really gain enjoyment from learning instead of grinding at a uni.


My school experience was awful


I really like Andy Matuschak's ideas. What I don't understand though is his obsession with spaced repetition.

While I see the value in it, the cost for me personally is also very high. I was surprised to hear that with just 10 minutes of practice he can sustain up to 40 new cards added every day.

I did quantum country a while back and I sometimes needed more than an hour to get through all the questions in a single practice session. Maybe my brain is just slower at recalling things.


My initial reaction to those numbers was “you’d have to know the stuff in order to go through 40 new cards and a backlog in 10 minutes”, but the thing is, spaced repetition is for remembering, not for learning. We tend to treat Anki as a tool for learning, but it isn’t.


> What I don't understand though is his obsession with spaced repetition.

I think Matuschak wants to be able to recall particular things ~forever, and spaced repetition is the best method known for that.

(Nit: Your sentence begins as a genuine expression of puzzlement rather than an outright criticism, but then you refer to Matuschak's "obsession", which pre-judges the issue. It would be more consistent to refer to his "emphasis on" or "advocacy of".)


Hah, I actually was looking for a better word than 'obsession', I even used the dictionary (English isn't my native language), but couldn't find one. Then I reluctantly picked obsession. 'Emphasis on' is much better, thank you!

Edit: Can't edit my original post though, time's up. But I would if I could!


> I did quantum country a while back and I sometimes needed more than an hour to get through all the questions in a single practice session.

That's very interesting! The review sessions cap at 50 questions, so that means well over a minute for each. Our design intent is that if you can't remember in a few seconds, you should mark it as forgotten, view the answer, and move on. The marginal benefit from exerting extra effort to remember on your own at that point is not high. But your comment suggests that it would be helpful for the interface to do something to suggest the tempo we have in mind. Thank you for sharing.


You ought to have the system detect slow progress / low success during the first 5 minutes, and then go "wait, this isn't working, try Plan B with much smaller chunks", and switch to drilling on a smaller number of questions over and over until the recall rate is high. Slogging through a long sequence of fail,fail,fail,fail does not generate enthusiasm or a sense of progress.

From your description, it sounds like, when a user flubs a question in a session and is shown the answer, you do not quickly re-test them on the same question during the session to improve recall, but just go on to other questions instead.
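Something like this, schematically (Python, with invented thresholds; ask(question) is a callback returning whether the user recalled the answer):

  import random

  def drill(chunk, ask, target=0.9):
      # Plan B: cycle a small set over and over until the recall rate is high.
      while True:
          results = [ask(q) for q in random.sample(chunk, len(chunk))]
          if sum(results) / len(results) >= target:
              return

  def run_session(questions, ask, probe=10, floor=0.3):
      results = []
      for q in questions:
          results.append(ask(q))
          # Early check, roughly "the first five minutes": if the success
          # rate is poor, stop slogging and switch to small-chunk drilling.
          if len(results) == probe and sum(results) / probe < floor:
              drill(questions[:5], ask)
              return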


That's interesting. When you can't recall a card though, and you mark it as forgotten, don't you need some time to actively memorize it again?

Like, I often had cards asking me a question, and then I'd have to think a little bit about the question, then I couldn't remember, then I look at the solution, and I have to think again about the answer, and how it relates to other concepts.


> I couldn't remember, then I look at the solution, and I have to think again about the answer, and how it relates to other concepts.

Yes, that's exactly how it's supposed to work the first few times you try to associate the answer with the question. Then it begins to sink in, and you can remember the answer for a few minutes after seeing it. When you've successfully remembered it several times after short-delay, then the program increases the delay. When you've successfully remembered it several times after medium-delay, then the program increases the delay again.

> don't you need some time to actively memorize it again?

The rapid repetition of asking / being shown the answer multiple times IS how you actively memorize it.

If the deck is so large it takes you an hour to get through one cycle, there are too many cards in it. Start over with a deck that takes you only 5-10 minutes, and spend an hour going through several repetitions. When your rapid-recall rate becomes high, slowly add more new cards.
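The scheduling idea, in deliberately simplified Python (real algorithms like SM-2 or FSRS tune the multiplier per card; the numbers here are illustrative):

  from dataclasses import dataclass

  @dataclass
  class Card:
      interval_minutes: float = 1.0  # start with near-immediate re-asking

  def review(card: Card, remembered: bool) -> float:
      if remembered:
          card.interval_minutes *= 3.0  # success: grow the delay
      else:
          card.interval_minutes = 1.0   # forgot: back to rapid repetition
      return card.interval_minutes

  card = Card()
  for outcome in [False, True, True, True, True]:
      review(card, outcome)
  # Delays after the successive successes: 3 -> 9 -> 27 -> 81 minutes.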


Yes, that does help a bit—elaborative processing. But it’s not something I’d consistently spend more than a few seconds per question on, generally speaking.


"retrieval practice" is a really efficient evidence based method [0]... maybe that's why they talk about spaced repetition

[0] https://journals.sagepub.com/doi/abs/10.3102/003465431668930...


I have rejected Anki and its competitors for learning. I found it shallow and a drag. It needs a high initial investment (prep cards, commit to reviewing every day) with zero instantaneous results (a week or two in, and the cards are still fuzzy). But these are superficial problems.

My deeper beef with this method is the complete absence of emphasizing, discovering, or forming connections between cohesive things. We're trying to learn; it's a superpower to start seeing patterns in what we learn, since that forms the buckets we can put new concepts and information in. Without it, the learning is ... shallow.

I found a better way. I map out full concepts to fit on single sheets of printer paper. The front side has mostly words, with lines connecting them or forming groups. The back side is for the related drudgery (formulae, dates, numbers, names). I repeat new things every day till I can reproduce the sheet, front and back, without any help, and then slowly introduce days of spacing between repetitions.

This is way more satisfying: no tech involved, no algorithms, just hard work, and it's way faster. I do not have any evidence of this working long term; the things I put so much effort into learning to reproduce with such accuracy are usually useful in the short term only. So it works for me.


I've started doing anki for geography for myself and with my 9yo daughter.

We've been doing it for a few weeks and she now knows ~100% of all countries and their flags. Just absolutely domination level learning.

I think it's a matter of finding things that fit Anki, and not trying to fit Anki to the thing you want to learn. Geography is a perfect application: we all would be a bit more informed by knowing all countries, seas, etc; and it's something that Anki is very well suited for.

I've also added:

- the numerical value for letters (A=1, B=2, C=3, etc) which I think will give me greater powers of lexical sorting. We'll see.

- NATO phonetic alphabet

- multiplication up to 12x12. I neglected/avoided automating that stuff as a kid and my confidence in doing mental arithmetic is still low. Not sure this is a good case for Anki yet...we'll see.

- A custom deck with the faces and names of everyone at my work. This feels like a slam dunk. I am terrible with names, so I think this can up my game a lot.

In my experience it's hard to find things that feel marginally useful/fun to learn and that work with Anki. But when it does work, it's amazing.


> the numerical value for letters (A=1, B=2, C=3, etc) which I think will give me greater powers of lexical sorting. We'll see.

Another way of doing it could be generating a deck with questions like:

"Q or P. Which comes first?"

I suppose that which technique will be superior depends on whether you usually sort things relative to each other, or relative to their container. If you have a fixed container of files, you could think, "ah, 'T', that's 20 (out of 26), I should look down 3/4 of the length of the container". But if the container wasn't evenly divided - for instance, your 'I' for 'Insurance' was a much thicker file than your 'T' for 'Taxes' or whatever - you'd no longer be able to use those numbers directly. What do you think?
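Generating that deck is a small script, by the way (Python; the output file name is made up, and it assumes Anki's two-field, tab-separated text import):

  import itertools, random, string

  with open("letter_order.txt", "w") as f:
      for a, b in itertools.combinations(string.ascii_uppercase, 2):
          first, second = random.sample([a, b], 2)  # randomize presentation
          f.write(f"{first} or {second}. Which comes first?\t{min(a, b)}\n")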


I think I'll go with the numbers. It's a smaller set of things to memorize and it's kinda like a fun game that I can quiz myself on when seeing license plates when driving (I find driving horribly boring). My dad used to factorize numbers on license plates when I was a kid :P


I can't stop myself from seeing whether the consecutive digits in phone numbers add up to 10 :)


Maybe Anki is indeed useful for memorizing flags, faces, words in a new language, syntax of programming languages and names of chemical compounds... maybe.

The things I'm trying to learn (past economic decisions and investments and their impacts, logical fallacies, algorithms and data structures for my next coding round, database design patterns, areas where one system design pattern excels and where it sucks, with examples) are all study areas where finding the core patterns and their applications is central to making any real progress. Anki sucks so badly at this.

My disgust at atomic spaced repetition, of which Anki is the cheerleader, comes from how gullible I was reading the salesy pitches floating around suggesting it: "remember anything", "remember forever", "how I memorized X in Y days". It left a bad taste, like those as-seen-on-TV home exercise machines and non-stick pans with grifty promises.

Anki may be useful to some, but it falls apart for everyone as soon as you add any meaningful complexity beyond mapping two lists word to word.

So why do it? Why not learn things the wholesome way, with pen and paper?


I think Anki zealots that pitch the software as the solution to everything can be both tiring and misleading. But I also think that memorization, using whatever system, is going to be a part of any kind of learning. As you mention, in some cases larger (medicine, foreign language vocab), in other cases smaller.

If we accept that all learning involves some memorization, I believe there's no harm in using the best tool for that specific job. I've seen a good amount of literature showing that SRS-like systems are indeed the best.


> We've been doing it for a few weeks and she now knows ~100% of all countries and their flags. Just absolutely domination level learning.

is this just for fun?


It’s for world domination, obviously. ;)


Yea. She is interested in flags and so I could sneak it in that way.


Multiplication tables are useful enough to memorize. Where I got tripped up is factors that look very similar to each other but have different answers.


> My deeper beef with this method is the complete absence of emphasizing, discovering, or forming connections between cohesive things. We're trying to learn; it's a superpower to start seeing patterns in what we learn, since that forms the buckets we can put new concepts and information in. Without it, the learning is ... shallow.

I'm confused why you'd expect spaced repetition to serve this purpose. Did someone claim it would?

Yes, it is shallow. It's meant to be shallow. It's not meant to replace other tools to build connections. It's not meant to be a complete solution. You still need to apply the material to learn it.

Spaced repetition is for remembering/recall - not understanding. It's useful for people who have already done the work to understand (practice problems, etc), but would like to keep it in memory. If you are taking grad level analysis, and can't remember that a compact set is closed, because it's been 2 years since you took undergrad analysis, then SR will help you.


> My deeper beef with this method is the complete absence of emphasizing, discovering, or forming connections between cohesive things. We're trying to learn; it's a superpower to start seeing patterns in what we learn, since that forms the buckets we can put new concepts and information in. Without it, the learning is ... shallow.

You can make connections that give you really deep intuition; it just takes practice making cards. I wrote about it here: https://jacobgw.com/blog/tft/2024/05/12/srs-intuit.html


What I like about the Anki approach is that it’s very conducive to scheduling and timeboxing. It removes a lot of variability from the process of memorizing things, and for me variability/unpredictability is a point of significant friction and a strong determiner of if I can consistently work on learning something or not.

In some cases one can also use decks made by others, which can help avoid wasting time on dry, unnecessarily fluffed up instructional materials with low signal-to-noise ratios like is common in university courses.


Andy's collaborator Michael Nielsen has a nice blog post, "Using spaced repetition systems to see through a piece of mathematics" [0]. He makes the point that the idea is to commit more and more higher-order concepts to memory. But he does emphasise that Anki is just one way to achieve this, and a simpler pen-and-paper method like the one you describe might work too.

[0] : https://cognitivemedium.com/srs-mathematics


I think learning with a network aligns better with how our brain works than learning linearly. In this sense, zettelkasten is a system superior to Anki, but I have come to realize that a network is harder to maintain, as it's missing a clear starting point. I also find it harder to recall, especially compared with linear stories. Another constant problem with any spaced repetition system is that you learn by repeating word for word, but essentially we memorize by adding more edges to a node of knowledge, just like how you know a person better as you attach more tags to them: my son's friend, our neighbor, the boy who owns the white dog, etc. I'm pretty optimistic about this problem being solved by the next-gen SRS: using AI to come up with different framings/descriptions of the same knowledge.


Okay, you're already operating at a higher level. You can mostly-memorize entire concept maps and redraw them from memory. You realize most people can't do that? Lots of other people are still operating at the flash card level.


> I was surprised to hear that with just 10 minutes of practice he can sustain up to 40 new cards added every day.

It'd take me over 10 minutes simply to add those 40 cards.

But if he's referring only to reviewing, it's believable. Personally, I don't think I can handle more than 10-20 new cards per day.

> I did quantum country a while back and I sometimes needed more than an hour to get through all the questions in a single practice session.

Are you literally going through all the questions? Isn't the whole point of SR not to do that?

If you meant it took you an hour to go through whatever subset is due for that day, my next question would be: Are you reviewing daily? SR algorithms assume you review daily, and that's the only sane way to keep the time low. As an example, out of a 2000 card deck, I had to review only 6 cards the other day.


> It'd take me over 10 minutes simply to add those 40 cards.
>
> But if he's referring only to reviewing, it's believable.

Yes, referring just to reviewing. Adding takes much more than ten minutes, as you say. And so, in practice, I don't saturate this capacity most days.


Well, I did go through the entire deck at some point because the SR algo gave them to me. But with 'all' I meant the ones that were given by the algo.


In my experience, the amount of time needed to remember things in Anki will slowly improve. The key word being slowly.

A lot of people get frustrated when they have been learning an item for weeks, answered the card over a dozen times, and yet still struggle to remember it. That’s because you need months for the SR algorithm to work properly. You really need to commit to it for the long haul for it to be effective.

Personally I can remember random words in obscure languages off the top of my head (instantaneously), purely because I added a card for them years ago and still keep current.


I stopped using Anki because the stuff I chose to remember and study is often of no applicable value, or it is of use but does very little toward developing the skills I want.

I have no doubt that spaced repetition is a biological reality in how our memories work but I haven't found a way to make it resonate for me.


It depends on what you're trying to learn. There are certainly some things which don't benefit much from the flash card format, but I do think even then, these kinds of complaints are largely a failing of the user, not the app. If you get a little creative and experiment with the format of your cards, you can learn pretty much anything via Anki. Or at least enhance your knowledge of the skill you're ostensibly also practicing in real life.

The mistake most people make is simply adopting the "boring" flash card format of FRONT-BACK, and not incorporating other more creative types of cards. For example, question-answer, visualization, triggers, or unique images. ChatGPT is pretty useful for this, as you can get it to present the same information in a variety of different formats and contexts.
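For example, something along these lines with the OpenAI Python client (the model choice and prompt are just illustrative):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def card_variants(fact: str) -> str:
      resp = client.chat.completions.create(
          model="gpt-4o",
          messages=[{
              "role": "user",
              "content": (
                  "Turn this fact into three Anki cards: one plain Q/A, one "
                  "cloze deletion, and one that asks for a vivid image or "
                  f"mnemonic. Fact: {fact}"
              ),
          }],
      )
      return resp.choices[0].message.content

  print(card_variants("The mitochondrion is the site of oxidative phosphorylation."))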


Interestingly, my experience is quite dissimilar at times. I've been using it for foreign language vocabulary acquisition (not multiple languages, just focusing on one language at the moment). I find that certain words just 'go in' easily, and I never have a problem remembering them as long as I review my Anki deck daily. For other words, they don't ever get substantially easier. I can still use Anki to help memorise them, but re-learning them is sometimes necessary.

A concrete example from my deck: word added 2024-02-24, three reviews, now with an interval of 3.03 months. No 'again' answers; nice and easy! Another word, added 2024-02-29, not so good: 14 reviews and an interval of 15 days.

I think there are several factors contributing to forgetfulness with Anki for me, some of which might overlap with your experience:

A: not properly 'learning' the word in the first place. For me, 'learning' means using the word in context, studying its etymology (even if I do not intend to memorise that) and saying it aloud. If I don't do that at the beginning, it won't really stick until I effectively start all over again.

B: 'learning' words on a bad day, or too late in the day. Even if I 'learn' the words properly, I need to have learnt them in Anki when I'm feeling moderately energetic. If I'm exhausted mentally or physically, my rigorous learning strategy doesn't seem to translate into memory. When I notice myself doing Anki reviews much slower than usual due to tiredness, I generally limit the review count and try to catch up when I'm fresh another day.

C: not being consistent enough with reviews. Both the time spent on each individual review and the time spent in total are important - 4 seconds per word is a good sign for me, and strictly 20 minutes a day in total. That allows me to keep up a pace of 12 new words a day very consistently.

Would love to hear which parts of this sound familiar to you, or what other things you've noticed for yourself!


A common misconception is that you need to make Anki a daily habit. Bad days (when I'm tired, stressed, or have a headache) would cause me to fail quite easy words that I could otherwise get spot on. Even if I've already started my reviews, if I notice any of that I just cut the session short. It's OK if you do reviews only 80-90% of the time; the algorithm still works fine.

Whenever I fail a word, I try hard to find a reason for it. Most of the time it's interference: my answer resembles some similar word that I already know. I make a mental note about it and add this other word to the card. In the most stubborn cases, some redundancy is good; I create another card for the same word in another context, or just for a derivative of it.

Another thing that works for me is adding images (some of my cards just have a picture on the question side), and example sentences with the word in various contexts.


> For other words, they don't ever get substantially easier.

This should (hopefully) get easier over time. From your dates there, it looks like the cards you have to re-learn often were only added three months ago? I think this should become easier for you in 6, 9, or 18 months, provided you continue to keep the cards current.

A. I definitely agree here. If I don't at least understand a word to, say, 30% confidence, I will never learn it, and will forever repeat it without making any progress. Personally, I use images and sound to help make words stick in my mind. There's a lot of research about the effectiveness of imagery (see the "picture superiority effect").

B. Ditto with A. In scientific terms, this is called "encoding." Properly encoding things at the beginning has a big effect on your long-term retention.

C. I do my reviews every morning while on the exercise bike. I use a gamepad to move through them more quickly, although I would technically be better with typing them out.

Also, as a side note, I have written a few blog posts on an old Substack about using Anki and AI tools:

https://neurotechnicians.substack.com/archive?sort=new

You might find the one about Using Images to Remember Things useful.


He is a knowledge hoarder, which is closely correlated with learning but not the same thing. I have the same issue. I do agree with him wholeheartedly that spaced repetition bootstraps you well for actual learning. Until someone else comes up with a better way to bootstrap efficiently, I mostly agree with him and see value in his obsession.


I love spaced repetition but the issue I have with it is that if you fall off the wagon you have to basically just reset. It becomes insurmountable.

The gamification of finishing my queue doesn't work when it always has thousands of entries in it.


Yeah, 10 minutes/day for 40 new questions/day seemed low to me as well. I agree with you that the plausibility of that figure depends on how long it takes to answer questions.

Matuschak writes:

> In my personal practice, I've accumulated thousands and thousands of questions.

> I spend about ten minutes a day using my memory system. Because these exponential schedules are very efficient, those ten minutes are enough to maintain my memory for thousands of questions, and to allow me to add up to about forty new questions each day.

It takes a while to formulate (and subsequently edit) a single good question (https://www.supermemo.com/en/blog/twenty-rules-of-formulatin...), but suppose we leave that aside and assume that by "using my memory system", Matuschak means answering previously-formulated questions. He says that questions should take only a few seconds each, so suppose it takes 10 seconds on average to answer a question. Then one could answer 40 questions in 400 seconds, which is under 7 minutes. That leaves 3 minutes to review roughly 20 questions of older material.
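Spelling that out (Python, using only the numbers assumed above):

  seconds_per_question = 10      # assumed average, per the estimate above
  new_per_day = 40
  budget_seconds = 10 * 60       # ten minutes

  new_time = new_per_day * seconds_per_question       # 400 s, under 7 minutes
  remaining = budget_seconds - new_time               # 200 s left over
  older_reviews = remaining // seconds_per_question   # ~20 older questions
  print(new_time, remaining, older_reviews)           # 400 200 20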


Yeah, that's more or less the math. My average is 6 seconds per question, so the mix is somewhat more older material.


I think he’s imagining the perfect tool for himself rather than a tool that could be widely adopted. He mentioned he doesn’t care at all about raising the floor with learning tools, i.e. wide accessibility, which is kind of a shame given how much AI now allows.


Learning well requires two circumstances for me.

1. I need a goal.

2. The time I need to practice to reach that goal needs to be reasonable.

This makes endeavours like learning to speak a language, to play an instrument, or getting buff unsustainable for me.


> This makes endeavours like learning to speak a language, to play an instrument, or getting buff unsustainable for me.

I think your goals are just far too abstract, you're thinking years into the future instead of something more immediate. Here's how I'd "atomize" them:

- Learning to speak a language → Being able to answer some basic everyday questions: What time is it? What's the weather like outside? How am I feeling? When's my birthday?

- Playing an instrument → Being able to play one popular song of my choice to a reasonable degree.

- Getting buff → Being able to look myself in the mirror and see progress.

Suddenly all of them are very achievable within a month or two. I then either lose interest or set myself some "higher" goal. Can I up that to three songs? Can I describe my work or hobbies in $targetLanguage? Is there a body part I'm especially interested in improving? Then that becomes my new "project" for the next couple of months.

Rinse and repeat, all the way until I can speak German (not quite, but I can point to my A2 certificate and call myself an advanced beginner), or play a piano (not quite, but enough to easily impress anyone that never tried), or until I feel good about the way I look (not quite, but it is an indisputable fact that I look better than ever before).


What do you consider a "reasonable" amount of time?

(I probably come from the opposite viewpoint: I consider the few axes along which I've devoted over a decade to learning as being the particular high-dimensional corners making me me, rather than any of my 8 billion other conspecifics)


I like a lot of these ideas. Some of this is built in to the ChatGPT desktop app. Other parts of it could be glued together from existing tools. Others are still beyond the capability of LLMs. Lots of people thinking along the same lines and there will be a lot of products taking a shot at this or similar uses. When one sticks it's going to be a big, if somewhat niche, hit. I know I'd use it.


> This AI system isn’t trapped in its own chatbox, or in the sidebar of one application. It can see what’s going on across multiple applications, and it can propose actions across multiple applications. Sam can click that button to view a changeset with the potential implementation. Then they can continue the conversation, smoothly switching into the context of the code editor.

That would be an incredibly valuable tool beyond learning, a killer feature for an operating system.


New windows PCs will come equipped with a co-pilot that can see everything on the screen. I wonder if this vision Andy laid out is more like a ChatGPT co-pilot that has all context of your screen/life, rather than a "learning tool". With this in mind, is it even worth building assistants that will behave similarly to ChatGPT?


That's the ChatGPT desktop app


The method for Meno's slave was really the way to learn - at least for me.

Example: How do you land on the moon? Build a rocket and land it.

Divide the problem into 99 steps involving gravity (math), rockets (mechanical, combustion, chemical), trajectories (advanced math), and then life support (biology, math).

Learn each of those steps and with enough money you can get to the moon.


Does anyone know what tech stack is used to align the audio with the text? How does the specific section get highlighted when the video reaches that mark, and, vice versa, how does the video jump to the location whose text is clicked?


The intro veers dangerously close to this, which I've read with abject horror: https://news.ycombinator.com/item?id=40425306


It could be misused and leak vital data, but so can your browser history and the data that ISPs, social media, and search companies have on you. They say all processing will be done locally, but you'd have to be a fool to trust that. I'd prefer to see this as a product you could connect to your computer, which would take care of all the processing and storage and have reasonable guarantees on privacy and encryption. I'm sure we'll see something like that; I'm unsure how successful it would be in the long run. It is a useful feature, being able to "recall" everything you've seen on your computer.

I get the naming, but it's bad branding, since "recall" has negative connotations and is such a common word with many uses. Time Machine was already taken ;P They should have gone with something more generic like Windows History or Microsoft Memory. If they wanted to be cheeky, then Tip of the Tongue, or Snappy the helpful screenshot as an animated camera, would have been better.


> If you wanna remember anything, create a picture, a pattern, a story, or a rhyme. To learn a skill, break it down and then rehearse the sub-skills.

from some podcast I forgot


An AI feedback loop would be great, but in my experience AI is really bad at being correct.


Killer framing. This is the point, this is the thing. Extremely well set up, beautifully crafted words, on the utmost of topics. Would that we be getting anywhere here!

> Subordinated to an authentic pursuit . . . Diving into a brick wall.

Alas, software usually is the wall; it keeps us from grasping true understanding & development. The interface is most often a wrapper high above the core of the software. We trap users, keeping them away from authentic & self-directed experience.

There was a great submission hours before on Enlightenmentware, on software that has enlightened us: letting users into the natural philosophies underlying software, bringing software from "wizards" and "just works" to an age of reason for users. I think this underlies everything set up here; it's the tales of systems that fomented implicit learning well! https://news.ycombinator.com/item?id=40419856

> Learning by immersion works naturalistically when the material has low enough complexity relative to your prior knowledge that you can process it on the fly, and natural participation reinforces everything important, giving you fluency. When those conditions aren't satisfied - which is most of the time - you will need some support. You want to just dive in, and you want learning to just work.

This is such a beautiful capstone for the bridge IT ought to be building. It speaks to the need for general systems research: new (or improved, I guess) systems for operating many processes (and their sprawling subprocesses/subroutines/promises) and seeing them run. It necessitates being free to take that live world and tinker, and run and rerun experiments, with safety (and the already-mentioned visibility), opening the option to become acquainted with capability and intent.

I'm less clear on the guided parts. But my thesis is that software fails to have sufficient starting conditions for most of these good implicit / guided learning loops to begin. We are trapped in a place where everything is arbitrary interface & none of it is learnable at all (to any honest depth). So we have no mental leverage to begin building mental muscles with.

This speaks deeply to me, as the shame of our industry & the shining endless journey we should be so excited to be exploring. That we haven't been trying for broader software ecosystems, for more visible and malleable software, is a resounding, quaking mystery. This is the great open-ended quest, the real journey of what we are doing, and we are not only flat failing to heed the call of this grand adventure; worse, to its detriment, we are building infernal machines that trap us. Even before machine learning, we were already far down the winnowing, closing path spoken of in Dune, and some day I hope the sleeping computing world might awaken from this slumber:

> Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

This talk sets up the greatest call for computing humanism that I have ever read. Throwing off the spheres of control & helping each other reach the poles is the point of these days, and computing's chance to be the vanguard pushing that forward is colossal, and keeps rising & getting yet more possible.


Thanks for the kind words. I'm happy with the framing of this talk, and much less sure about my proposed solutions. :)



