This is an interesting idea. Have you considered allowing different models for different chat nodes? My current very primitive solution is to have AI Studio on one side of my screen and ChatGPT on the other, with me in the middle playing them off each other.
Yes, you can switch models at any time for different chat nodes. So you can have different LLMs review each other's work, for example. We currently support all the major models: ChatGPT, Gemini, Claude, and Grok. Hope this helps.
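For anyone wondering what "a different model per node" could look like under the hood, here is a minimal TypeScript sketch (purely illustrative, not the app's actual implementation): each node in the chat tree carries its own model id, and a review step just feeds one node's output to another node that uses a different model. The `callModel` function and the model names are placeholders, not real provider APIs.

```typescript
// Hypothetical sketch of per-node model selection; not the product's real code.
type ModelId = "gpt-4o" | "gemini-1.5-pro" | "claude-3.5-sonnet" | "grok-2";

interface ChatNode {
  id: string;
  model: ModelId;      // each node picks its own model
  prompt: string;
  output?: string;
  parentId?: string;   // nodes form a tree, branching off any message
}

// Placeholder for whatever client actually talks to the provider APIs.
type CallModel = (model: ModelId, prompt: string) => Promise<string>;

// Run one node with its own model, then have a second node
// (using a different model) review the result.
async function runWithReview(
  callModel: CallModel,
  work: ChatNode,
  reviewer: ChatNode
): Promise<{ work: ChatNode; review: ChatNode }> {
  work.output = await callModel(work.model, work.prompt);
  reviewer.prompt = `Review the following answer critically:\n\n${work.output}`;
  reviewer.output = await callModel(reviewer.model, reviewer.prompt);
  return { work, review: reviewer };
}
```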
How granular are the specs? Is it at the level of "this is the code you must write, and here is how to do it", or are you letting AI work some of that out?
Second this. I've used both PostGraphile and Prisma on production projects (they can work well together too; rough sketch below). Both are cool, but PostGraphile is way more mature.
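For what it's worth, "working together" can look roughly like this. A minimal sketch, assuming an Express server, PostGraphile v4's middleware, and a hypothetical `user` model in your Prisma schema: PostGraphile serves the GraphQL API generated from the Postgres schema, while Prisma handles an imperative write path in the same process.

```typescript
// Sketch only: PostGraphile exposes GraphQL over the existing Postgres schema,
// while Prisma backs a custom write endpoint in the same Express app.
import express from "express";
import { postgraphile } from "postgraphile";      // v4 middleware
import { PrismaClient } from "@prisma/client";    // assumes `prisma generate` has run

const app = express();
const prisma = new PrismaClient();

app.use(express.json());

// GraphQL API auto-generated from the "public" schema.
app.use(
  postgraphile(process.env.DATABASE_URL, "public", {
    graphiql: true,
    enhanceGraphiql: true,
  })
);

// Imperative endpoint using Prisma; `user` is a hypothetical model in schema.prisma.
app.post("/signup", async (req, res) => {
  const user = await prisma.user.create({
    data: { email: req.body.email, name: req.body.name },
  });
  res.json(user);
});

app.listen(3000, () => console.log("listening on :3000"));
```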
I don't do much JS (as you can likely tell) but found the task approachable. I think that the low barrier to entry is a good thing for an interview question.
My approach is described in the comments: I started by writing the approach out as comments and then went from there.