These "frameworks" are useless, and as you say you can do what they offer in a few hours and better. So, stop using them. It's not a popularity contest.
Case in point: I don't use any frameworks whatsoever. I wrote a conversational AI Agent that helps me write AI Agents, integrated it into an office suite, and unleashed it at the law office where I'm CTO; we currently have just shy of 900 agents created by me, the attorneys, and their staff. They support new client interviews, legal research, document authoring, case financial modeling, and pretty much whatever the staff needs to keep doing what they already did before AI, only now with idiot-savant help.
Everything is based on chat completion, with most agents using structured output that maps directly onto the office suite's internal data structures. The AI Agents act as virtual co-workers inside the office software used by staff, and each staff member personalizes their agents to their own needs.
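To give a rough idea of the shape of that, here is a minimal sketch of one such call. The endpoint is the standard OpenAI-style chat completions API, and the CellUpdate schema is an illustrative assumption, not my production code:

```typescript
// Sketch: one chat-completion call whose structured output maps directly
// onto the office suite's internal data structures. The CellUpdate type,
// model name, and expected JSON shape are illustrative placeholders.
type CellUpdate = { row: number; col: number; value: string };

async function runAgent(systemPrompt: string, userRequest: string): Promise<CellUpdate[]> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userRequest },
      ],
      // Ask for JSON the suite can consume directly.
      response_format: { type: "json_object" },
    }),
  });
  const data = await res.json();
  // Parse the agent's reply straight into the suite's own structures.
  return JSON.parse(data.choices[0].message.content).updates as CellUpdate[];
}
```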
If I tried to do this with these frameworks, I'd still be dinking around with their abstractions and lack of documentation.
I used open source implementations of a browser-based word processor and a spreadsheet. Because they are open source, their source code, the developers' GitHub repositories, and the various support forums for those tools are all in the major LLMs' training data.
Typically, I begin by creating a chatbot seeded with the claim that it knows a given open source tool because it was a contributor to that project, and I converse with it to figure out which data structures and which APIs within that tool are useful for retrieving and injecting the application's active, in-use data. For a word processor that is the document itself; for a spreadsheet it is the cells and their data and/or formulas.
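The seed itself is nothing fancy, just a system prompt along these lines (the wording here is illustrative, not my actual prompt):

```typescript
// Illustrative seed prompt: tell the model it was a contributor to the
// open source tool so the conversation can dig into internals.
const seedPrompt = (project: string) => `
You were a long-time contributor to the open source project "${project}".
You know its source tree, its internal data structures, and its public and
internal APIs. I am going to ask you which structures and APIs are useful
for reading the application's active, in-use data and for injecting data
back into it. Answer in terms of the actual code: module names, functions,
and the data formats they expect.
`.trim();
```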
Then I examine the different ways to get data out of and into the tool, and I write a single-purpose AI Agent for each form of I/O. For example, with the word processor there are different things a person might want an agent to do, such as revise the layout of the document. That requires the HTML/CSS form of the document, and one agent handles those requests. It is easy to have a document whose HTML/CSS representation is larger than an LLM's output limit, and that triggers the creation of two more agents: one that operates on the current selection, meaning it works only on a subset of the larger document, and another that breaks the document into chunks and processes it in parts small enough for the LLM's available output, which requires additional handling to ensure the chunking does not disrupt the contextual flow of the transformed document text.
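The split step of the chunking agent looks roughly like the sketch below: it breaks only at block-element boundaries so each chunk is self-contained markup. The size limit, and the reassembly and flow handling, are simplified assumptions here:

```typescript
// Sketch: split a document's HTML into chunks small enough for the model's
// output limit, breaking only at block-element boundaries. The real logic
// also handles oversized single blocks and preserves flow across chunks.
function chunkDocument(html: string, maxChars: number): string[] {
  const doc = new DOMParser().parseFromString(html, "text/html");
  const chunks: string[] = [];
  let current = "";
  for (const block of Array.from(doc.body.children)) {
    const piece = block.outerHTML;
    if (current.length + piece.length > maxChars && current.length > 0) {
      chunks.push(current);
      current = "";
    }
    current += piece;
  }
  if (current.length > 0) chunks.push(current);
  // Each chunk is transformed independently, then reassembled in order.
  return chunks;
}
```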
Other things a user might want to do with a document concern its content, the words themselves. For example, a user might want a literary critic to assess how well they wrote something and how understandable it will be for an audience with particular characteristics. That type of question does not require the document's HTML/CSS; it only requires the words. If the HTML/CSS is delivered along with the words, the markup just gets in the way, and the LLM has to do extra work to filter it out before it can even begin looking at the writing quality. And if only the words are sent, far more material can be delivered for the LLM to consider than could be sent as HTML/CSS.
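Getting down to just the words is the easy part; a browser-side sketch of the extraction (illustrative, not the production code):

```typescript
// Sketch: extract only the visible text from the document's HTML so the
// content-level agents (literary critic, etc.) never see the markup.
function extractWords(html: string): string {
  const doc = new DOMParser().parseFromString(html, "text/html");
  // Drop anything that isn't prose before reading the text.
  doc.querySelectorAll("style, script").forEach((el) => el.remove());
  return (doc.body.textContent ?? "").replace(/\s+/g, " ").trim();
}
```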
Yet another type of question my system supports is using a document as the context seed to create a new chatbot that knows the document's subject beyond what the document itself contains, and that can identify incorrect, misleading, or confusing portions of the original. This feature is especially useful with spreadsheets, because most people's general knowledge of spreadsheets is weak. The "spreadsheet discussion bot" I have will reverse engineer an unknown spreadsheet and explain how to use it, as well as identify questionable formulas and methods the spreadsheet may be relying on.
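The seeding there is the same move as with the word processor, just with the cells and formulas serialized into the prompt. A rough sketch, where the Cell shape and the wording are illustrative assumptions:

```typescript
// Sketch: serialize a spreadsheet's cells and formulas into a context seed
// for a "spreadsheet discussion bot". Not the production schema.
type Cell = { ref: string; value: string; formula?: string };

function spreadsheetSeed(cells: Cell[]): string {
  const listing = cells
    .map((c) => `${c.ref}: ${c.formula ? "=" + c.formula : c.value}`)
    .join("\n");
  return [
    "You are an expert at reverse engineering unfamiliar spreadsheets.",
    "Here are the cells and formulas of a spreadsheet you have never seen:",
    listing,
    "Explain what this spreadsheet does, how a person should use it,",
    "and flag any questionable formulas or methods it relies on.",
  ].join("\n\n");
}
```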
Each of these examples represents a different data representation of, and use of, the same document. All I'm doing is figuring out each tool's internal data representation, taking it, changing it with an LLM, and putting it back into the tool, which then uses the changed data unaware that anything changed.
Of course, what I do with the data using an LLM is a separate matter and can be complex. I have written my own prompting framework I call "method actor prompting" that has two layers: first I tell the LLM that it is a method actor, using the formal terms method actors use. This creates an impersonating LLM that goes further and deeper into the impersonation than a plain LLM does. Then I tell that "method actor" that the role they are playing is a subject matter expert in whatever it is the user is trying to do. Communicating with that agent then requires the user to actually treat it as if it were that expert, and to interact with it using the terms and language that expert would expect when discussing their professional vocation.
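Structurally it is just two composed system-prompt layers, something like this sketch (the wording is illustrative; my real prompts are far longer):

```typescript
// Sketch of the two-layer "method actor prompting" structure.
// Layer 1: the model is a method actor, described in the craft's own terms.
// Layer 2: the role that actor is playing is the subject matter expert.
function methodActorPrompt(expertRole: string, domain: string): string {
  const actorLayer = `
You are a method actor. You fully inhabit your role: you draw on sense
memory, given circumstances, and objectives, and you never break character
or refer to yourself as an actor or an AI.`.trim();

  const roleLayer = `
The role you are playing is a ${expertRole}, a recognized subject matter
expert in ${domain}. You speak with that expert's vocabulary, use the
concepts of the field, and expect the person you are talking with to engage
with you as they would with a real ${expertRole}.`.trim();

  return `${actorLayer}\n\n${roleLayer}`;
}
```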
My prompts are larger than most, but the replies I get back from the LLMs tend to be very high quality.
I've also written what I call "chatbotBot", a chatbot that will conversationally glean from the user what new agent they need; chatbotBot then writes that new agent, or modifies an existing one to suit, and integrates it into the system for immediate use. Then there's "agent morphing", where an agent's knowledge and skills can be morphed into a different set of knowledge and skills, which is very useful for some of the more complex agents (the spreadsheet agents) that have elaborate prompts and are difficult to modify by hand.
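What makes both of those tractable is that, under the hood, an agent is just a data record. Roughly this shape, where the field names are illustrative rather than my actual schema:

```typescript
// Sketch: the kind of record chatbotBot emits for a new agent and that
// "agent morphing" rewrites in place. Field names are illustrative.
interface AgentDefinition {
  name: string;
  description: string;               // what the user asked for, in their words
  systemPrompt: string;              // method-actor layer + expert-role layer
  ioMode: "html" | "text" | "cells"; // which data representation it works on
  outputSchema?: object;             // JSON schema when structured output is required
}

// chatbotBot converses until it can fill this in, then registers the record
// so the new agent is immediately available in the suite. Morphing is a
// targeted rewrite of systemPrompt (and sometimes outputSchema) while the
// name and ioMode stay stable.
```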
You can check this out for yourself at https://midombot.com/b1/home. I'm building in public, more or less, with little to no fanfare. What you see was all hand coded by me, except for the word processor and spreadsheet tools, which, as I've described above, I've heavily modified. I do not use AI coding tools; I find they do not help. But I do have multiple coding 'bots I've written that I converse with all the time about coding strategy.