No idea what this thing is, but it appears to be some sort of AI-related thing:
>While it’s unclear what Microsoft is specifically using our models for, it is believed this is in preparation for a local Copilot running with on-device models
For those wondering, the RWKV architecture is an alternative to the transformer architecture, and has the nice property of handling very long inputs cheaply. The speculation here that it might be for code assistance would make sense. Early versions of RWKV that I played with took a long time to process input strings, but generation was quick. I could imagine engineering finding a good fit with a codebase that’s mostly static while an engineer is editing only parts of it.
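The reason generation is quick is that RWKV replaces attention with a recurrent state: each new token updates a fixed-size running summary instead of re-attending over the whole history. Here is a minimal, illustrative sketch of a simplified per-channel WKV recurrence (the real implementation uses a numerically stable max-shifted form and learned per-channel decays; the function name and scalar `w`/`u` here are my own simplification):

```python
import numpy as np

def rwkv_wkv_step(state, k, v, w, u):
    """One O(1) recurrent step of a simplified RWKV WKV mix.

    state: (num, den) running exp-weighted sums over past tokens
    k, v:  current token's key and value vectors (per-channel)
    w:     decay rate applied to the past state
    u:     bonus applied to the current token
    """
    num, den = state
    # Output blends the decayed past with the current token (extra weight u).
    out = (num + np.exp(u + k) * v) / (den + np.exp(u + k))
    # Decay the state and fold in the current token -- constant work per step,
    # regardless of how long the sequence already is.
    num = np.exp(-w) * num + np.exp(k) * v
    den = np.exp(-w) * den + np.exp(k)
    return out, (num, den)

# Usage sketch: state size stays fixed no matter how many tokens stream in.
d = 4
state = (np.zeros(d), np.zeros(d))
out, state = rwkv_wkv_step(state, np.zeros(d), np.ones(d), w=1.0, u=0.0)
```

This is why a mostly-static codebase is a good fit: the expensive part is streaming the context through the recurrence once, after which each generated token costs the same small, fixed amount of work.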
Hey there, Twitter author / guy from the RWKV team.
Some updates: It's also arriving on Windows 10 (so that's 1.5B deploys)
Our main guess is that it's for the Copilot beta features being tested:
- local copilot
- local memory recall
And it makes sense, especially for our smaller models:
- we support 100+ languages
- we are extremely low in energy cost
If anyone has a machine with local copilot / memory recall enabled, please reach out to me on my twitter @picocreator - I want to decompile and trace this down =)
https://blog.rwkv.com/p/rwkvcpp-shipping-to-half-a-billion