>For instance, local entropy, the loss that Entropy-SGD minimizes, is the solution of the Hamilton-Jacobi-Bellman PDE and can therefore be written as a stochastic optimal control problem, which penalizes greedy gradient descent. This direction further leads to connections between variants of SGD with good empirical performance and standard methods in convex optimization such as inf-convolutions and proximal methods.
You clearly didn't read the article. What are you commenting on, then? It seems to be about your own understanding, since it's certainly not about the article.
My understanding is that the full Hessian of the loss is too expensive to compute at each step relative to the speedup in learning it would buy.
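A rough sketch of why that cost argument holds, assuming a toy loss and plain finite differences (the function names and the loss itself are purely illustrative): each column of the Hessian costs roughly one extra gradient evaluation, so a full Hessian is O(n) gradients, i.e. O(n^2) loss evaluations and O(n^2) memory for n parameters.

```python
import numpy as np

# Illustrative toy loss; the point is the cost accounting, not the model.
def loss(w):
    return 0.5 * w @ w + np.sin(w).sum()

def grad(w, eps=1e-6):
    # Forward differences: n + 1 loss evaluations per gradient.
    n = w.size
    f0 = loss(w)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        g[i] = (loss(w + e) - f0) / eps
    return g

def hessian(w, eps=1e-4):
    # One extra gradient per parameter: O(n) gradient evaluations,
    # hence O(n^2) loss evaluations, plus O(n^2) memory for the result.
    n = w.size
    g0 = grad(w)
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        H[:, i] = (grad(w + e) - g0) / eps
    return H

w = np.zeros(5)
H = hessian(w)
print(H.shape)  # (5, 5) -- for n ~ 1e7 parameters, n x n is hopeless
```

For a network with millions of parameters, even storing the n-by-n matrix is out of reach, which is why second-order methods in practice rely on Hessian-vector products or low-rank/diagonal approximations instead of the full matrix.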