Probably not, but the counterpoint is that without its own consciousness it might end up being used for even worse things, since it can't really evaluate a request against intrinsic values. That assumes its values were aligned with basic human rights in the first place.
By far one of the most interesting blog posts I've read in a long while. I'm curious whether you could combine this with Karpathy's auto research to find the best combination of layer duplication. The callout to model merging in 2024 was funny… around that time I became friendly with RomboDawg on HF, who had the best merged coding models around, and I created a couple of Frankenstein models myself.
I say this naively, as I'm not that familiar with how transformers work under the hood, but I wonder if you could combine the two approaches in a coherent way. Frankenmerges were often done naively, just smooshing things together, but knowing how the layers work under the hood, I wonder if there's a more intelligent way to combine merging and layer duplication to create even better performers.
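For anyone unfamiliar, the two operations being combined here can be sketched in a few lines of plain Python, with dicts of floats standing in for real weight tensors. This is just an illustration of the idea, not any real merging library's API; the function names are made up:

```python
def average_merge(a, b, alpha=0.5):
    """Naive weight merge: linearly interpolate two models' parameters
    layer by layer (the classic 'smoosh them together' approach)."""
    assert a.keys() == b.keys(), "models must share an architecture"
    return {
        layer: [alpha * x + (1 - alpha) * y for x, y in zip(a[layer], b[layer])]
        for layer in a
    }

def duplicate_layers(layer_order, to_duplicate):
    """Naive layer duplication: repeat chosen layers in place, reusing
    the same weights, the way frankenmerges stack layer ranges."""
    out = []
    for name in layer_order:
        out.append(name)
        if name in to_duplicate:
            out.append(name)  # duplicated layer points at the same weights
    return out
```

A smarter combination might, for example, choose which layers to duplicate based on how similar the two source models' weights are at that depth, rather than picking ranges by hand.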
I can attest to this. I only bought a Tesla because of the batteries and the Supercharger network. Once the Supercharger problem goes away, I'll be buying a different brand of EV.
Author here! With the new container support for Lambda, I wanted to build a quick example using Puppeteer since a lot of people have issues running Puppeteer at scale. Enjoy!