Doesn't that workflow sound backwards? I'd much rather write the tests myself (so I can be sure it works the way I expect and get a somewhat reasonable interface for the functions), then let an LLM write the implementation, rather than the other way around.
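Something like this, say (a minimal sketch; `slugify` is a hypothetical example function and Jest is assumed as the test runner): I write the spec and the typed stub, and the LLM's job is to fill in the body until the spec passes.

```typescript
// slugify.test.ts -- the spec I write by hand (slugify is a hypothetical example)
import { slugify } from "./slugify";

describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("collapses repeated separators and trims the ends", () => {
    expect(slugify("  Foo --  Bar  ")).toBe("foo-bar");
  });
});

// slugify.ts -- the stub I hand to the LLM; the interface is already fixed
export function slugify(input: string): string {
  throw new Error("not implemented yet -- the LLM's job");
}
```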
I guess the end goal is the same as "acceptance testing": have the LLMs write both the unit tests and the implementation. The missing piece is letting the LLMs debug the inevitable crashes in production and hotfix them, so the website visitors (LLM crawlers) don't complain.
The recent blog post about reaching v1 was quite a sobering reflection.
It always amazes me when a startup takes on a really hard problem with initial excitement about changing the world, only to end up publishing a blog post admitting that, well, it turned out to be a hard problem and they'd better pivot to make some money. I've probably read dozens of those posts, and the trend, it seems, is still going strong.
Pythagora pivoted to full app development using AI. There's an open source core licensed under FSL-MIT[0] and a VSCode extension[0]. There's an overview video of the just-released v1 (early access); more info at https://www.pythagora.ai/
I wish it could write integration tests. That requires an understanding of the testing scaffold and the fake data it produces. LLMs aren't ready for that yet, but someday it will be magical.
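To make that concrete, here's roughly what I mean (a sketch only: the `createTestApp` factory, the seed fixture, and the route are all hypothetical names for this example; Supertest is assumed for the HTTP layer). Writing this test well requires knowing that the scaffold boots the app against a fake store and exactly what data it seeds.

```typescript
// users.integration.test.ts -- sketch of the kind of test I'd want generated
import request from "supertest";
import { createTestApp } from "./testScaffold"; // hypothetical: boots the app against a fake store

// Fake data the scaffold seeds before each test run.
const seedUsers = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

describe("GET /users/:id", () => {
  it("returns a seeded user as JSON", async () => {
    const app = createTestApp({ users: seedUsers });
    const res = await request(app).get("/users/2");
    expect(res.status).toBe(200);
    expect(res.body).toEqual({ id: 2, name: "Grace" });
  });

  it("404s for an id the fixture never seeded", async () => {
    const app = createTestApp({ users: seedUsers });
    const res = await request(app).get("/users/999");
    expect(res.status).toBe(404);
  });
});
```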