Interesting stuff! I've been trying ollama/gpt + continue.dev and Copilot in VSCode for a while now, and the chat-style assistant is a huge plus. In DevOps especially (my main work), the codegen is rather unhelpful, but being able to have an LLM explain sections of code and help with rubber-ducking is invaluable.
I see that good output requires good input (the prompt). How does Copilot Workspace determine a good prompt? I see the GitHub repo already has a bunch of "Tips and Tricks" for getting better results. What is your experience so far? Should we change the way we write issues (user stories / bug reports / change requests) into a format that AI/Copilot understands better? (half-joking, half-serious)
Well, that's basically the heart of Copilot Workspace! The whole UX is structured to make it easy for the human to steer (rough sketch after this list):
- Alter the "current" bullet points in the spec to correct the AI's understanding of the system today
- Alter the "proposed" bullet points in the spec to correct the AI's understanding of what SHOULD be
- Alter the files/bullets in the plan to correct the AI's understanding of how to get from current to proposed
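To make that concrete, here's a rough sketch of the shape of the thing. The wording, file names, and layout are invented for illustration, not the actual UI. Say the task is "add rate limiting to the API":

```
Spec
  Current behavior
  - Requests to /api/* are not throttled           <- edit me if wrong
  Proposed behavior
  - Requests are limited to 100/min per API token  <- edit me if wrong

Plan
  - middleware/rate_limit.js: add token-bucket rate-limit middleware
  - app.js: register the middleware on /api routes
```

Each of those bullets is a steering point: fix a "current" line if the model misread the code, fix a "proposed" line if it misread your intent, fix a plan line if it picked the wrong approach.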
That said, I think there's definitely a future where we might want to explore how we nudge humans into better issue-writing habits! A well-specified issue is as important to other humans as it is to AI. And "well-specified" isn't about writing more; it's about clarity: the right level of detail, a clear articulation of what success means, and so on.
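To show what I mean by clarity over volume, here's a purely hypothetical bug report (the names and paths are made up) that has the right level of detail and says what success looks like:

```
Title: 500 error on /login after session timeout

Current: visiting /login with an expired session returns a 500
(stack trace ends in SessionStore.get).
Expected: expired sessions redirect to /login with a "session
expired" notice.
Done when: expired sessions never 500, and the redirect is covered
by a regression test.
```

Notice it's short: one observable symptom, one expected behavior, one success criterion. That's the kind of input that's easy to turn into a spec, whether the reader is a human or a model.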