Hacker News
Generate entire books in seconds using Groq and Llama3
23 points by _jtyq 13 days ago | 10 comments

When I was younger, I read some self-help book. The author, in an attempt to prove the effectiveness of resolve, said he had written the entire book in a day. I put it away immediately and never read any further. If this dude didn't spend more than a day writing it, I'm not going to waste my time reading it. That was my immediate reflex.

I may make an exception for Nietzsche, who sometimes did write whole chapters in a day or two of good health. But for a mere mortal, I expect proof of work.

Now how inclined am I to read a book that someone generated with AI within a few minutes? I'm not even gonna read the _announcement_. Call me a fool, or old-fashioned.


Wow. I wouldn't even read a book that took 10 years to be written unless enough other people have read it and vouched that the book is actually good.

Call me old-fashioned, but I honestly thought that's how one selects the relatively few books one can actually read from among the gazillions that get written in the wild.


> But for a mere mortal, I expect proof of work.

How can you make a fair comparison between the two if you haven't read one?


Constructively using LLMs tends to require validating the quality of their output; even when not hallucinating content, they do hallucinate confidence. Increasing the size of the output dramatically increases the effort needed to validate it.

Both examples have sections where the model simply left in placeholders for concepts - in example 1, "Summary of Key Takeaways" repeatedly references `[Book Topic]`, while example 2 starts doing so much earlier in "Overview of the Book".
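A quick automated scan would catch this class of failure before anyone ships a book. A minimal sketch (the output directory layout and file names are hypothetical; hits are candidates for review, since legitimate bracketed text would also match):

    import re
    import pathlib

    # Unfilled template slots like "[Book Topic]" or "{chapter_title}" usually
    # mean the model echoed the prompt scaffolding instead of writing content.
    PLACEHOLDER = re.compile(r"\[[A-Z][\w ]{2,40}\]|\{\w+\}")

    def find_placeholders(book_dir="output"):
        for path in sorted(pathlib.Path(book_dir).glob("*.md")):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                for match in PLACEHOLDER.finditer(line):
                    print(f"{path.name}:{lineno}: leftover placeholder {match.group()!r}")

    find_placeholders()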

What is the goal of this project? If it is meant to be more than a learning exercise, I'd hope for a lot more investment in quality control for the final output (but then I'd perhaps be even more worried that people would trust that output uncritically).


The goal was to showcase a task where Groq's speed would be useful while showing what current LLMs can and can't do with a task like book generation. That's why the placeholder content is mentioned in the limitations section. The end books are definitely not perfect, but I am impressed by the generations nonetheless, especially since the majority of the content is generated using Llama3-8b.

I don't think a publishable-quality book can currently be generated this way. I do think it is a helpful tool for generating an entire book on any nonfiction topic you want to learn more about, no matter how specific.
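For anyone curious, the core of the approach is the standard outline-then-sections loop. A rough sketch using the Groq Python client (not the project's exact code; the topic and prompt wording here are made up):

    from groq import Groq  # pip install groq

    client = Groq()  # reads GROQ_API_KEY from the environment

    def ask(prompt, model="llama3-8b-8192"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    topic = "fermentation for home cooks"  # hypothetical example
    outline = ask(f"List the chapters of a nonfiction book about {topic}, one per line.")

    chapters = []
    for heading in filter(None, map(str.strip, outline.splitlines())):
        chapters.append(ask(
            f"Write the chapter '{heading}' of a nonfiction book about {topic}. "
            "Write concrete prose; never leave placeholders like [Book Topic]."
        ))

    print("\n\n".join(chapters))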


This will produce garbage. I've experimented with getting content out of LLMs, and it requires very careful context and system prompt grooming from multiple angles to get them to produce and refine even single, coherent scenes for fiction. Just telling the model to produce an outline and then iterating over the sections to generate them will result in a collection of confabulated essays that won't be a coherent whole.
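To make that concrete: the naive loop gives each section call nothing but its outline heading, whereas the grooming I mean threads state between calls. A sketch of the latter, with `ask` standing in for any prompt-to-completion function:

    # Sketch: thread a rolling summary through the section calls so each one
    # sees what was actually written, not just its outline heading.
    def write_book(ask, headings, premise):
        summary = ""
        sections = []
        for heading in headings:
            section = ask(
                f"You are drafting one coherent book. Premise: {premise}\n"
                f"What has happened so far: {summary or 'nothing yet'}\n"
                f"Write the section '{heading}', staying consistent with the above."
            )
            sections.append(section)
            # Re-summarize after every section; without this, later sections
            # confabulate because they never see the earlier ones.
            summary = ask(
                "Update this running summary with the new section. Keep names "
                f"and established facts.\nSummary: {summary}\nNew section: {section}"
            )
        return sections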

Great, now make one that takes in an autogenerated long form garbage like this book and summarizes it into a prompt so I don't have to waste time reading it.

A little more constructive: the difference between long-form content like this and the 'medium' of interacting with a model is that a book is static, while a model allows for interactive consumption of content. I'd lean more into that than this.


I can't think of any use case for this beside flooding online book stores with garbage. The grift never ends!

I was thinking more along the lines of a Young Lady's Illustrated Primer

They should add a filter to store search results now: published before 2023.


