The article is amazingly short of examples AFAICT. Even the penguin example doesn’t show the interesting result it got - it just tells us that it found an interesting result.
This sounds interesting, but the author seems to have omitted what would make it a really compelling article.
Agreed. I like the idea (because it makes intuitive sense), but it will take some time for me to learn whether it actually works. If there is a reason I am skeptical, it is that the author seems to share some of my background characteristics, and the verbiage in the GitHub repo is basically what GPT would recommend for my own mini projects.
That does not make it less interesting, but it does add an interesting layer to verifiability. Which kinda sucks, because it means we are officially at a point where we need to actively screen for the stuff that is genuinely worth diving into.
Unreadable AI slop. If there's anything of technical interest in here, it's buried too deep underneath the parade of LLM clichés and self-aggrandizing marketing drivel.