I am the solo dev behind ReelDecks. I built this because I was frustrated with forgetting almost everything I learned from educational YouTube videos. Passive watching just doesn't work for retention.
My goal was to create a one-click tool that turns any video into an active study session with flashcards.
The initial version was simple: it analysed the video's transcript. But I quickly hit a wall. The real challenge was the huge number of valuable videos that have bad transcripts or are purely visual (like coding demos, data visualisations, or product walkthroughs).
So, I re-architected ReelDecks into a smart hybrid system:
First, it tries to use the transcript, which is fast and cost-effective.
But if the transcript is missing or low-quality, it automatically switches to a visual AI model to literally 'watch' the video and create flashcards from the on-screen text, code, and diagrams.
This means it works reliably on everything from a university lecture to a silent coding tutorial.
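Under the hood, the decision logic boils down to something like this (a simplified sketch, not the production code; the function names and quality thresholds are illustrative):

```typescript
// Simplified sketch of the transcript-first, vision-fallback flow.
// The declared functions and the thresholds are illustrative stand-ins.

interface Flashcard {
  front: string;
  back: string;
}

type Transcript = { text: string; confidence: number } | null;

declare function fetchTranscript(videoId: string): Promise<Transcript>;
declare function generateFromTranscript(text: string): Promise<Flashcard[]>;
declare function sampleKeyFrames(videoId: string): Promise<string[]>; // frame image URLs
declare function generateFromFrames(frameUrls: string[]): Promise<Flashcard[]>;

// A transcript is "usable" if it exists, has enough text, and isn't
// low-confidence auto-captioning (numbers made up for the sketch).
function transcriptIsUsable(t: Transcript): t is NonNullable<Transcript> {
  return t !== null && t.text.length > 500 && t.confidence > 0.7;
}

async function buildDeck(videoId: string): Promise<Flashcard[]> {
  const transcript = await fetchTranscript(videoId); // fast, cheap path

  if (transcriptIsUsable(transcript)) {
    return generateFromTranscript(transcript.text);
  }

  // Fallback: let a vision model "watch" sampled frames and pull cards
  // from on-screen text, code, and diagrams.
  const frames = await sampleKeyFrames(videoId);
  return generateFromFrames(frames);
}
```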
I just launched and this is the first place I am sharing it. I would be incredibly grateful for your honest feedback on the concept, the execution, and any features you think are missing. Thanks for taking a look!
This is awesome. The promise of reliably preserving formatting and placeholders like {{username}} is the absolute killer feature here.
I'm really curious about the prompt engineering involved. How do you instruct the model to translate the surrounding text while ensuring it never touches the placeholders or any embedded HTML, especially since syntax can vary between languages?
That seems like the most difficult part of the problem to solve robustly.
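The most robust approach I can imagine is masking: swap every placeholder and tag for an opaque token before the model ever sees the text, then restore them afterwards. Something like this untested sketch:

```typescript
// Naive masking sketch: swap {{placeholders}} and HTML tags for opaque tokens
// before translation, then restore them afterwards. The regex only covers
// {{...}} and simple tags; real placeholder syntaxes vary a lot.

function maskProtected(input: string): { masked: string; tokens: string[] } {
  const tokens: string[] = [];
  const masked = input.replace(/(\{\{[^}]+\}\}|<[^>]+>)/g, (match) => {
    tokens.push(match);
    return `__PH_${tokens.length - 1}__`; // token the model is told to keep verbatim
  });
  return { masked, tokens };
}

function unmask(translated: string, tokens: string[]): string {
  return translated.replace(/__PH_(\d+)__/g, (_, i) => tokens[Number(i)]);
}

// Example:
const { masked, tokens } = maskProtected('Hi {{username}}, click <a href="/x">here</a>!');
// masked === 'Hi __PH_0__, click __PH_1__here__PH_2__!'
// Translate `masked`, then unmask(translatedText, tokens) puts the originals back.
```

Curious whether you do something along those lines or lean entirely on prompting.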
This is brilliant. The NFC tag integration is clever, but the mention of using the existing tags in library books is the real killer feature here. It solves the biggest friction point of needing to buy and set up your own tags.
I can imagine a magical workflow where you just tap a library book as you leave and it’s automatically added to your 'Currently Reading' list in the app, ready to be timed. That's a fantastic bridge between the physical and digital worlds.
Thanks so much! Yes, the embedded NFC chip in library books really removes the biggest friction point: you don’t need to buy your own tags (they’re only about $0.30 each, but most people simply don’t have one at hand).
In BookPace, every NFC tag, whether it’s your own or one from a library book, is treated as a black box. The app associates the book with the unique ID of the tag. Setup takes less than a minute and only needs to be done once — after that, every tap automatically recognizes the book.
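Conceptually the data model is tiny; a simplified sketch (illustrative names, not the actual app code) looks like this:

```typescript
// Tag-as-black-box model: the app never parses the tag's contents,
// it only keys the book record off the tag's unique ID (UID).

interface Book {
  title: string;
  author: string;
  totalPages: number;
}

// One-time setup: tap the tag, enter the book details, and the UID
// becomes the lookup key from then on.
const registry = new Map<string, Book>(); // tagUid -> Book

function registerBook(tagUid: string, book: Book): void {
  registry.set(tagUid, book);
}

// Every later tap: the UID read from the tag resolves straight to the book,
// so a reading session can start immediately.
function handleTap(tagUid: string): Book | undefined {
  return registry.get(tagUid);
}

// Example (hypothetical UID):
registerBook("04:A2:3B:91:7C:55:80", { title: "Dune", author: "Frank Herbert", totalPages: 412 });
const current = handleTap("04:A2:3B:91:7C:55:80"); // -> the Dune record
```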
I do wish we could read more structured info directly from library tags (like title, author, or ISBN) so users don't need to add the book info in the first place, but that data is usually encoded in a vendor-specific format that varies across library systems. Even so, just leveraging the existing tag IDs already creates a surprisingly smooth bridge between the physical and digital worlds.
The vendor-specific data format is a tough problem for sure, but the unique ID approach sounds like a great way to deliver a smooth user experience right now. Cheers!
This is a fantastic toolkit, and the discussion in the comments about the learning benefits of manual creation is spot-on.
I think the real killer feature here isn't just bulk-generating new cards, but enriching existing, manually-created ones.
My ideal workflow would be:
1. Manually create a basic card when I encounter a new word (e.g., the word and the sentence I found it in). This preserves that crucial "moment of discovery" and initial learning.
2. Once a week or so, run anki-llm as a batch process on all new cards to add powerful, context-rich fields like etymology, common collocations, or subtle nuance (vs. a similar word).
This way, you get the best of both worlds: the initial learning from manual creation, followed by automated enrichment that would be too tedious to do by hand. Really powerful stuff, great work!
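For what it's worth, the weekly enrichment pass I'm picturing is roughly this (a hypothetical sketch with made-up field names and stand-in functions, not anki-llm's actual interface):

```typescript
// Hypothetical enrichment pass over newly created vocab notes.
// fetchNewNotes, callLlm and saveNote are stand-ins for whatever the toolkit
// (or AnkiConnect) actually exposes; the field names are invented here.

interface VocabNote {
  id: number;
  word: string;
  sentence: string;        // filled in manually at the "moment of discovery"
  etymology?: string;      // filled in later by the batch pass
  collocations?: string;
  nuance?: string;
}

declare function fetchNewNotes(deck: string): Promise<VocabNote[]>;
declare function callLlm(prompt: string): Promise<string>;
declare function saveNote(note: VocabNote): Promise<void>;

async function enrichNewCards(deck: string): Promise<void> {
  const notes = await fetchNewNotes(deck);
  for (const note of notes) {
    if (note.etymology) continue; // already enriched, skip

    const prompt =
      `For the word "${note.word}" as used in "${note.sentence}", ` +
      `return JSON with keys etymology, collocations, nuance.`;
    const enriched = JSON.parse(await callLlm(prompt));

    await saveNote({ ...note, ...enriched });
  }
}
```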
This is awesome, and your post about coming back to front-end after the jQuery era really resonates. It’s a whole new world! Major props for diving into web components and shipping this.
One small suggestion that might be a fun addition for a power user: keyboard shortcuts. Being able to hit Spacebar to start/stop the most recently active timer, or N to focus the "Add Timer" input field would be a great little UX enhancement.
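Something along these lines would probably cover it (untested sketch; the element selector and the timer function are guesses, since I haven't read the source):

```typescript
// Untested sketch of global shortcuts; toggleLastActiveTimer and the
// "#add-timer-input" selector are guesses about the app's internals.

declare function toggleLastActiveTimer(): void;

document.addEventListener("keydown", (event: KeyboardEvent) => {
  // Don't hijack keys while the user is typing in a text field.
  const target = event.target as HTMLElement;
  if (target.tagName === "INPUT" || target.tagName === "TEXTAREA") return;

  if (event.code === "Space") {
    event.preventDefault(); // stop the page from scrolling
    toggleLastActiveTimer();
  } else if (event.key.toLowerCase() === "n") {
    event.preventDefault();
    document.querySelector<HTMLInputElement>("#add-timer-input")?.focus();
  }
});
```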