Yes, LLMs don't improve the way humans do, but they could use other tools, for example designing programs in Prolog, to expand their capabilities. I think the next step in AI will be LLMs being able to use better tools or strategies: for example, designing architectures in which logic rules, heuristic algorithms, and small fine-tuned LLM agents are integrated as tools for a larger LLM. I think new, more powerful architectures for helping LLMs are going to mature in the near future. Furthermore, there is an economic push to develop AI applications for warfare.
Edited: I should add that a Prolog system could help the LLM continue learning by adding facts to its database and inferring new relations, for example using heuristics to suggest new models or directions to explore.
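For concreteness, here is a minimal sketch of that loop, assuming the pyswip bindings to SWI-Prolog; the predicates (uses/2, tool/1, can_reason_with/2) are hypothetical, standing in for knowledge an LLM orchestrator might emit and assert at runtime.

    # Minimal sketch, assuming pyswip and SWI-Prolog are installed.
    # The predicate names are hypothetical illustrations.
    from pyswip import Prolog

    kb = Prolog()

    # Facts the LLM could assert as it "learns" during a session.
    kb.assertz("uses(llm, prolog)")
    kb.assertz("uses(llm, heuristics)")
    kb.assertz("tool(prolog)")
    kb.assertz("tool(heuristics)")

    # A rule that lets the system infer new relations from accumulated facts.
    kb.assertz("can_reason_with(X, T) :- uses(X, T), tool(T)")

    # The LLM (or its orchestrator) queries the database for inferred relations.
    for answer in kb.query("can_reason_with(llm, T)"):
        print(answer["T"])   # prolog, heuristics

The point is only that the knowledge base persists and grows between calls, which is something the frozen LLM weights cannot do on their own.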
If you have a NumPy function that takes, for example, two arguments, my proposal is to add an optional argument that allows the function to be applied to cells of dimensions i and j, as in function_name(a, b, range=(i, j)), so that the function is applied to subarrays of dimension i of a and subarrays of dimension j of b to create a new array. The broadcast operation and the axis arguments are not a general solution. In J you have such a mechanism, as the example shows.
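To make the proposal concrete, here is a rough Python sketch of what such an argument could do; apply_ranked and its cell_ranks parameter are hypothetical names, and the frame-matching rule is deliberately simplified (no broadcasting of the leading axes).

    import numpy as np

    def apply_ranked(func, a, b, cell_ranks):
        """Hypothetical sketch of the proposed range=(i, j) argument: apply
        `func` to the trailing i-dimensional cells of `a` paired with the
        trailing j-dimensional cells of `b`, collecting the results."""
        i, j = cell_ranks
        a_frame = a.shape[:a.ndim - i]   # leading "frame" axes of a
        b_frame = b.shape[:b.ndim - j]   # leading "frame" axes of b
        if a_frame != b_frame:
            raise ValueError("frame shapes must match in this simple sketch")
        results = [func(a[idx], b[idx]) for idx in np.ndindex(a_frame)]
        cell_shape = np.asarray(results[0]).shape
        return np.array(results).reshape(a_frame + cell_shape)

    # Pair each 1-D row of `a` with the corresponding 2-D matrix cell of `b`.
    a = np.random.rand(4, 3)
    b = np.random.rand(4, 3, 2)
    c = apply_ranked(np.dot, a, b, cell_ranks=(1, 2))   # c.shape == (4, 2)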
guvectorize in Numba seems to be a good approximation to the rank concept I mentioned, but it is not a complete solution. Unfortunately I don't have the time right now to study it and make a full comparison, but guvectorize is a step in the right direction. Thanks for providing that information.
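For anyone curious, a small example of what I mean: guvectorize lets you declare the cell shape of each argument in a layout string and then loops over the leading axes for you. The kernel below is just an illustration (a row vector times a matrix); only the decorator and layout syntax come from Numba.

    import numpy as np
    from numba import guvectorize, float64

    # One signature: a 1-D cell of `a`, a 2-D cell of `b`, a 1-D output cell.
    @guvectorize([(float64[:], float64[:, :], float64[:])], '(n),(n,m)->(m)')
    def row_times_matrix(a_cell, b_cell, out):
        # Numba calls this once per pair of cells, broadcasting the leading axes.
        for k in range(b_cell.shape[1]):
            acc = 0.0
            for n in range(a_cell.shape[0]):
                acc += a_cell[n] * b_cell[n, k]
            out[k] = acc

    a = np.random.rand(4, 3)
    b = np.random.rand(4, 3, 2)
    c = row_times_matrix(a, b)   # cells matched along the leading axis; c.shape == (4, 2)

The limitation compared to a true rank argument is that the cell shapes are fixed at decoration time in the layout string, rather than being chosen per call.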