Are these copilots essentially all the same underlying model, but with various configurations creating a different prompt for the model? And to create your own, you configure it based on your project specs? Just making sure I understand what's happening. If so, it's kind of a cool and "simple" solution to making a model work better. Simple in quotes because nothing in software is actually simple, but the idea is.
But in short: yes, that's how it works as of today! Our docs go into a bit more detail here.
But this space has been moving so fast that how it worked last week usually isn't how it works this week.
You could easily imagine us leveraging fine-tuned world models on top of prompting in the near future.
Another thing to add is that we use a bunch of different models for our Copilot implementation…some popular off-the-shelf ones as well as some that we trained ourselves.
The really cool thing that I forgot to mention here is that it makes the model composable.
As you provide requirements and specs via properties on your project, OR by adding components (which have their own properties), you are implicitly optimizing the model towards your use case (e.g. analog audio aerospace applications).
These requirements/specs are all in human language…so an added benefit is that you provide more documentation for your human peers too!
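Roughly, you can picture the composition like this. This is a minimal sketch under my own assumptions, with made-up names (Project, Component, build_system_prompt), not their actual implementation:

```python
# Hypothetical sketch: project-level and component-level properties get
# folded into a single prompt that conditions a general-purpose model.
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    properties: dict[str, str] = field(default_factory=dict)


@dataclass
class Project:
    name: str
    properties: dict[str, str] = field(default_factory=dict)
    components: list[Component] = field(default_factory=list)


def build_system_prompt(project: Project) -> str:
    """Compose a system prompt from the project's requirements and specs."""
    lines = [f"You are assisting with the project '{project.name}'."]
    for key, value in project.properties.items():
        lines.append(f"Project requirement - {key}: {value}")
    for component in project.components:
        lines.append(f"Component '{component.name}':")
        for key, value in component.properties.items():
            lines.append(f"  {key}: {value}")
    return "\n".join(lines)


# Example: an analog audio board intended for aerospace use.
project = Project(
    name="Preamp Board",
    properties={"domain": "analog audio", "environment": "aerospace"},
    components=[Component("op-amp", {"rail voltage": "±15 V"})],
)
print(build_system_prompt(project))
```

Because those properties are plain human-readable text, the same strings that steer the model double as documentation for your teammates.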