So A* is optimal under the constraints it assumes: given an admissible heuristic and no extra knowledge of the domain, you can't do better in general (a minimal sketch of what I mean is below).
However, sometimes the specific domain you are searching has other properties that can be exploited to do better than A*. An example is Jump Point Search, which exploits properties of grid maps when movement is restricted to certain directions.
If you were able to write a general search algorithm that could effectively exploit whatever the specific properties of the underlying domain are "automatically", without you having to work out what they are, that would be useful, right?
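For concreteness, here is a minimal plain-A* sketch in Python (the grid, the Manhattan heuristic, and the expansion counter are my own illustration, not anything from the paper). With an admissible heuristic the returned path cost is already optimal, so domain-specific tricks like JPS only win by expanding fewer nodes, not by finding cheaper paths.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid. 0 = free cell, 1 = wall.
    Returns (path cost, nodes expanded)."""
    def h(cell):
        # Manhattan distance never overestimates on a 4-connected grid,
        # so it is admissible and the returned cost is optimal.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, cell)
    best_g = {start: 0}
    expansions = 0
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if g > best_g.get(cell, float("inf")):
            continue                     # stale queue entry
        if cell == goal:
            return g, expansions
        expansions += 1
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None, expansions

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))  # -> (6, number of expansions)
```

The expansion counter is the kind of "search steps" number the comparisons later in this thread are about.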
Because they used a transformer to reach a nice solution, better than a typical A* search, which is the "naive" or go-to solution, and they didn't have to design an algorithm themselves. I find it quite impressive that a simple encoder-decoder transformer can achieve so much.
It is in the abstract, the very first line: "While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks ..."
This paper is interesting (to me) as an example of how to use transformers for decision making. I don't particularly care if it is up to A* standards at the moment.
> What is the scientific contribution of the paper?
That's not a question you ask other people; it's a bullet point at the top of the outline you create while reading the paper for yourself. You should see the creation of that outline as a measure of your actual interest in the subject.
Having now read the paper, I find the research area interesting because optimizations in optimal pathfinding have applications in robotics, gaming, and reasoning (which is the ultimate intention of this paper).
The research team identified ways to tokenize pathfinding algorithms for two tasks, maze solving and Sokoban (a game where crates have to be pushed onto goal squares), and then trained a model on the execution traces of these algorithms (a rough sketch of what that could mean is below).
The result was that the "Searchformer" model needed about 26.8% fewer search steps than standard A* search. If that carries over to route planning, robotics, and game development, it could have tangible performance benefits.
IMHO, it is not a wild breakthrough but an interesting solution to a real-world problem.
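To make "trained a model on the execution traces" a bit more concrete, here is a rough sketch of serializing an A* run into a flat token stream. The token names (start, goal, create, close, plan-found) and the tiny grid are my own guesses for illustration, not the paper's actual vocabulary; the point is only that the solver's intermediate search dynamics, not just the final plan, become a sequence a transformer can be trained on.

```python
import heapq

def trace_astar(grid, start, goal):
    """Run A* on a small grid and emit a flat token sequence describing the
    execution trace (which nodes were pushed and expanded, in what order).
    The token names are made up for illustration only."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tokens = [f"start {start[0]} {start[1]}", f"goal {goal[0]} {goal[1]}"]
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if g > best_g.get(cell, float("inf")):
            continue                      # stale queue entry
        tokens.append(f"close {cell[0]} {cell[1]} g{g} h{h(cell)}")
        if cell == goal:
            tokens.append("plan-found")
            return tokens
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    tokens.append(f"create {nr} {nc} g{ng} h{h(nxt)}")
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    tokens.append("no-plan")
    return tokens

grid = [[0, 0], [1, 0]]
print(" ".join(trace_astar(grid, (0, 0), (1, 1))))
```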
Because it's further evidence for the "unreasonable effectiveness of transformers", i.e. that transformers are a fully general approach for all kinds of learning tasks, not just next-token prediction. Obviously there is a strong and a weak version of that hypothesis, and the strong version is probably not true, but to the extent that we appear to be homing in on Nature's "one true way" to learn how to accomplish a task, that seems like important news.
In certain computer science problems, a suboptimal action at time t may give rise to an optimal outcome at time >t.
Why wouldn't this be the case for research generally? Has our community really devolved to the point where work is only noteworthy insofar as it advances SOTA on a given problem?
I agree. OP's comment could be quickly rewritten into something useful just by changing the tone, for example:
"26.8% fewer search steps than standard A∗ search"
For reference, as prior art: it's slightly better than A*, which is far from SOTA on Sokoban (https://festival-solver.site/).
What is impressive in this paper? Why is this on Hacker News?