If I had to guess, I imagine the downvotes are because an upvote is preferred over (albeit well-intentioned) "well done" comments in HN threads, in order to keep the signal-to-noise ratio high.
I never understood that. Simple comments like "Nice!" or "I agree" are really rare anyway, and I don't find it difficult to scroll past them, as I would with any other comment whose first few sentences I don't find salient.
Fair; I was mostly thinking it would be nice to give people who want to try it themselves the term that's most often used.
It’s hard: you have to eat around maintenance-level calories, but you also need to make a high percentage of them protein, and keep enough carbs that you don’t bonk if you’re doing any cardio (I like jump rope myself). Just cutting or bulking gives a little more flexibility.
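The budgeting above can be sketched as a quick calculation. All the numbers here are hypothetical examples (maintenance calories, bodyweight, and macro targets vary widely per person); the 1 g protein per lb and ~25% calories from fat figures are common rules of thumb, not the commenter's own numbers.

```python
# Hypothetical recomp macro sketch; every constant below is an assumption.
MAINTENANCE_KCAL = 2500          # assumed daily maintenance calories
BODYWEIGHT_LB = 180              # assumed bodyweight

protein_g = BODYWEIGHT_LB * 1.0  # ~1 g protein per lb, a common target
protein_kcal = protein_g * 4     # protein is 4 kcal per gram

fat_kcal = MAINTENANCE_KCAL * 0.25        # ~25% of calories from fat (9 kcal/g)
carb_kcal = MAINTENANCE_KCAL - protein_kcal - fat_kcal  # remainder as carbs for cardio

print(f"protein: {protein_g:.0f} g, fat: {fat_kcal / 9:.0f} g, carbs: {carb_kcal / 4:.0f} g")
```

With these assumed numbers the carbs left over (~290 g) are what keep you from bonking during cardio; cutting or bulking relaxes the calorie constraint, which is why it's easier.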
https://mealplannr.io
The end game is no/low-touch weekly meal plans sent directly to your inbox, with meals you love to cook but none of the hassle around planning the meals, shopping lists, etc. (which I spend hours on every week).
An important feature for me was improving the recipe discovery experience: you can build a cookbook from chefs you follow on socials (YouTube for now), or import from any source (the web, a picture of a cookbook, etc.). It then has tight, easy integration into recipe lists.
It utilises GenAI to auto-extract recipes, manage conversions, and merge/categorise shopping lists, as well as powering the actual recommendations engine.
If anyone is interested in beta testing or wants to have a chat, I'll look out for replies, or message mealplannr@tomyeoman.dev
I played around with it this week. If you enable advanced mode and point the post-transcription AI model at your own server, which mimics minimal ChatGPT-compatible behaviour, you can use it to modify the output. For example, if you notice the transcript was really meant to do something else ("turn the lights on"), you can return an empty string, and it won't inject keypresses.
So you get the best of both worlds: transcription for dictation and transcription to trigger events.
If only I could now let it listen constantly and react to voice, with no push-to-talk, that would be nice.
Maybe this project here could be used for that.
Also, this seems to support streaming transcription.
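For anyone wanting to try the trick above, here is a minimal sketch of what such a post-transcription server could look like. Assumptions throughout: the tool POSTs to an OpenAI-style `/v1/chat/completions` endpoint with the transcript as the last message, and `COMMAND_PHRASES` is a made-up trigger list, not anything from the actual tool.

```python
# Minimal ChatGPT-compatible post-transcription filter (sketch, assumptions noted above).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical command triggers; tailor to your own voice commands.
COMMAND_PHRASES = ("turn the lights", "play music", "set a timer")

def filter_transcript(text: str) -> str:
    """Return the transcript unchanged for dictation, or an empty string
    when it looks like a voice command, so no keypresses get injected."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in COMMAND_PHRASES):
        # ...this is also where you could dispatch the actual command...
        return ""
    return text

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        transcript = body["messages"][-1]["content"]  # last message holds the transcript
        reply = {"choices": [{"message": {
            "role": "assistant",
            "content": filter_transcript(transcript),
        }}]}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To run locally, uncomment:
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

You would then point the tool's "advanced mode" AI endpoint at `http://127.0.0.1:8080`; an empty `content` means nothing gets typed.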
"We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns"
Why would they ever make this statement? There's nothing to gain from it, right?
This is a very "distant" suggestion if you enjoyed Antimemetics, but The Unconsoled by Kazuo Ishiguro is another of my favourites, and it too explores this idea of unreliable and inconsistent memories, albeit from a completely different angle.
I consider Recursion by Blake Crouch to be similar, even though I liked Antimemetics much better. I haven't read Crouch's other books, but have heard that Dark Matter is better than Recursion, though it may be less similar to Antimemetics.
I've enjoyed Peter Watts in a similar way to how I enjoy qntm: it's nerdy, explores interesting ideas, and is written by a professional in his field who draws on his education, skills, and interests. His premier work is probably Blindsight, but the Sunflower Cycle stories are likely easier to start with. Like qntm's, a lot of his work is online for free:
Every interaction has different (in many cases real) "memories" driving the conversation, as well as unique personas / background information on the owner.
Is there a lot of noise? Sure, but it maps much more closely to how we as humans communicate with each other (through memories of lived experience) than just an LLM loop; IMO that's what makes it interesting.
I would say it's fairly substantially different, for a few reasons:
- You can run any model; for example, I'm running Kimi 2.5, not Claude
- Every interaction has different (likely real) memories driving the conversation, as well as unique personas / background information on the owner.
It maps much more closely to how we as humans communicate with each other (through memories of lived experience) than just an LLM loop; IMO that's what makes it interesting.