Agents are just regular LLM chatbots that are prompted to parse user input into instructions about what functions to call in your back-end, with what data, and so on. Basically, it's a way to take arbitrary user input and turn it into pseudo-logic you can write code against.
As an example, I can provide a system prompt that mentions a function like get_weather() being available to call. Then, I can pass whatever my user's prompt text is and the LLM will determine what code I need to call on the back-end.
So if a user types "What is the weather in Nashville?" the LLM would infer that the user is asking about weather and reply to me with a string like "call function get_weather with location Nashville" or, if you prompted it to, some JSON like { function_to_call: 'get_weather', location: 'Nashville' }. From there, I'd just call that function with whatever data I asked the LLM to provide.
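To make that concrete, here's a rough sketch of what that loop might look like. The prompt wording, the JSON shape, and the callLLM/getWeather helpers are placeholders I'm assuming for illustration, not any particular vendor's API:

```typescript
// Sketch only: callLLM and getWeather are hypothetical stand-ins.

const SYSTEM_PROMPT = `
You can call one back-end function: get_weather(location).
If the user asks about weather, reply with only JSON like:
{ "function_to_call": "get_weather", "location": "<city>" }
`;

// Stand-in for the real weather lookup on the back-end.
async function getWeather(location: string): Promise<string> {
  return `72 and sunny in ${location}`;
}

// callLLM is whatever client you use to send (system prompt + user text) to the model.
async function handleUserPrompt(
  userText: string,
  callLLM: (system: string, user: string) => Promise<string>,
): Promise<string> {
  // e.g. '{ "function_to_call": "get_weather", "location": "Nashville" }'
  const reply = await callLLM(SYSTEM_PROMPT, userText);
  const parsed = JSON.parse(reply); // the "pseudo-logic" you can write code against

  // Dispatch on whichever function the LLM said to call.
  if (parsed.function_to_call === "get_weather") {
    return getWeather(parsed.location);
  }
  throw new Error(`Don't know how to handle: ${reply}`);
}
```

The LLM never runs anything itself; it just hands you back, in a shape you chose, the name of your own function to call and the data to call it with.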
Relative to that scale, L2 is how I've come to understand it. It's kind of soft-sold as L3, but that would require quite a bit of work on the vendor side (e.g., implementing an AWS Lambda-style setup for authoring functions the LLM can call).