No, sadly they're just voicing an opinion already voiced by (many) other scientists.
My master's thesis was on text-to-SQL, and I can tell you hundreds of papers conclude that seq2seq and transformer derivatives suck at logic, even when you approach the logic symbolically.
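For context, text-to-SQL maps a natural-language question to an executable SQL query, and the hard cases tend to involve compositional logic like negation over a subquery. A minimal illustration (the schema and question are made up for this sketch, not taken from any benchmark):

```python
import sqlite3

# Toy schema. A question like "Which departments have no employees earning
# over 100k?" forces the model to compose a NOT EXISTS correctly -- the kind
# of logical structure text-to-SQL systems are evaluated on.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp(id INTEGER PRIMARY KEY, dept_id INTEGER, salary INTEGER);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'R&D');
    INSERT INTO emp VALUES (1, 1, 120000), (2, 2, 90000);
""")

# The target SQL for that question -- logical negation over a subquery:
rows = conn.execute("""
    SELECT d.name FROM dept d
    WHERE NOT EXISTS (
        SELECT 1 FROM emp e WHERE e.dept_id = d.id AND e.salary > 100000
    )
""").fetchall()
print(rows)  # → [('R&D',)]
```

Getting the surface SQL syntax right is easy; reliably producing the correct *logical* structure (NOT EXISTS vs. NOT IN vs. a mis-scoped WHERE) is where the papers report the failures.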
We'd love to see production rules of any sort emerge with transformer scale, but I have yet to read such a paper.
It's also true that Apple hasn't really created any of these technologies themselves; afaik they're using a mostly standard LLM architecture (not invented by Apple) combined with task-specific LoRAs (not invented by Apple). Has Apple actually created any genuinely new technologies or innovations for Apple Intelligence?