I've been doing LLMs for 3 years. Lots of the "agentic" stuff people talk about nowadays was already possible with GPT-3 back in the day. It just seems like "agent" is a new buzzword intended to keep the AI hype going, which is fine, except that no one really defines agents beyond simple function/tool calling.
The discourse has been taken over by hype - I'll give you that. I'm in the space [1] and I'll try my best to answer.
Building on your definition, I'd say an agent is a collection of LLM calls with structured outputs.
You can give an LLM some context and ask "what's the next step?". An agent does that recursively, with some exit condition.
Those structured outputs inform the control flow of the agent, so much so that at the end of the execution, you can argue that the agent has "written its own control flow".
The structured outputs may produce (see the sketch after this list):
- Function calls, and their inputs (tool calling)
- Some reasoning text (for the context)
- An exit condition evaluation (done=true)
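To make that concrete, here's a minimal sketch of such a loop in TypeScript. Everything in it (the `Step` shape, the `tools` registry, `runAgent`) is a hypothetical illustration of the idea, not Inferable's API or any particular SDK:

```typescript
// One step of the agent, as a structured (JSON) output from the model.
// The exact shape is a made-up example, not a real SDK's schema.
interface Step {
  reasoning: string;                          // reasoning text, fed back into the context
  toolCall?: { name: string; input: string }; // optional function call + its inputs
  done: boolean;                              // exit condition evaluation
}

// Hypothetical tool registry: tool name -> implementation.
const tools: Record<string, (input: string) => Promise<string>> = {
  search: async (q) => `results for "${q}"`,  // stub tool, for illustration only
};

// `llm` is any model call that returns a Step; plug in your own.
async function runAgent(
  llm: (context: string) => Promise<Step>,
  task: string,
  maxSteps = 10,
): Promise<string> {
  let context = `Task: ${task}`;
  for (let i = 0; i < maxSteps; i++) {
    const step = await llm(`${context}\nWhat's the next step?`);
    context += `\nThought: ${step.reasoning}`; // reasoning text extends the context
    if (step.done) break;                      // the model decided it's finished
    if (step.toolCall && tools[step.toolCall.name]) {
      const result = await tools[step.toolCall.name](step.toolCall.input);
      context += `\nObservation: ${result}`;   // tool output becomes new context
    }
  }
  return context;
}
```

The point is that which branch runs (which tool, and when to stop) is decided by the model's structured output at runtime, not by control flow you wrote up front - that's the sense in which the agent "writes its own control flow".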
Hope that helps!
[1] https://github.com/inferablehq/inferable