What I want to know is how to become someone who is able to obtain the kind of things that provide leverage. I already have the machines, I just don't have the platform to move around exciting events, and I simply stay in my room. I don't even have a bank account afaik. My PayPal account insists on using Chinese (it's cursed). I'm waiting for GitHub to implement a currency I can use. Then again, maybe there's nothing I would do differently.
I would recommend looking at local hackerspaces and just befriending people there. Chat around, listen to others, show interest. That's a good starting point.
I'm also learning. The models get more accurate when they have more parameters, say 7b (7 billion parameters) vs 8x7b (56 billion parameters), but they also take more time and resources at higher parameter counts. TheBloke on Hugging Face uploads quantized models, which means they can run on lower-spec computers with a possible hit on quality; he offers multiple configurations per model depending on what you prefer. Big models can be too heavy and slow, so the sweet spot is probably something like 13b. You can try different gguf models with this program: https://github.com/madprops/meltdown
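To give a feel for why quantization matters, here's a back-of-the-envelope sketch of model memory footprints. The numbers are illustrative only (real GGUF files add overhead for metadata and context buffers), and the function and quant names are my own, not from any library:

```python
# Rough memory-footprint estimate for quantized models.
# Illustrative only: real GGUF files carry extra overhead.
BITS_PER_WEIGHT = {"f16": 16, "q8_0": 8, "q4_0": 4}

def model_size_gb(n_params: float, quant: str) -> float:
    """Approximate file/RAM size in GB for a given parameter count."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

# A 7b model drops from ~14 GB at f16 to ~3.5 GB at 4-bit,
# which is why quantized builds fit on ordinary laptops,
# while an 8x7b model stays hefty even when quantized.
print(round(model_size_gb(7e9, "f16"), 1))    # 14.0
print(round(model_size_gb(7e9, "q4_0"), 1))   # 3.5
print(round(model_size_gb(56e9, "q4_0"), 1))  # 28.0
```

That's the tradeoff in a nutshell: lower bits per weight buys you a smaller footprint at some cost in quality.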
Thank you for this. It's a good start. But also, there is so much on Hugging Face, how does anyone evaluate it all? It's not possible to personally test everything and develop good intuitions about what might be useful in particular situations.
As an analogy, 10 years ago we had a lot of debates on Hacker News about various languages and frameworks: PHP versus Python and Rails versus Django. But where do I go nowadays for similar discussions about all of the tooling that is springing up around the AI and LLM and NLP space?
First thing I tried was checking whether I could use it to share images, which would be a nice way to organize what I share into folders. But apparently the file can only be opened inside Puter itself; otherwise it asks the user if they want to download it.
I'm aware not all websites would be cleanly cached, but a lot of text-based pages can be cached and be useful. The HTML is already rendered, so why waste it? For instance, all HN discussions could be cached easily.
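The idea could be sketched as a simple disk cache keyed by a hash of the URL, hitting the network only on a miss. This is just a minimal illustration of the concept, not any site's actual implementation; all the names here are hypothetical, and the fetcher is injectable so it's easy to swap out:

```python
# Minimal disk cache for rendered HTML pages, keyed by URL hash.
# A sketch of the idea only; names are hypothetical.
import hashlib
from pathlib import Path
from urllib.request import urlopen

def fetch_url(url: str) -> str:
    """Default fetcher: download a page over the network."""
    return urlopen(url).read().decode("utf-8", errors="replace")

def cached_page(url: str, cache_dir: Path, fetch=fetch_url) -> str:
    """Return the page for `url`, serving from disk when possible."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(url.encode()).hexdigest()
    path = cache_dir / f"{key}.html"
    if path.exists():          # cache hit: no network request at all
        return path.read_text()
    html = fetch(url)          # cache miss: fetch once, then store
    path.write_text(html)
    return html
```

After the first visit, every later request for the same discussion page is served straight from disk, which is exactly the "the HTML is already rendered, why waste it" point.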
Well, in order to really know, we'd need to know exactly what processes get triggered at GitHub after a commit. Maybe it has certain triggers like updating databases, re-calculating information, etc. So I'm asking whether someone who knows the insides of GitHub could simply answer if it's "cheap and efficient enough" or "every commit is kinda expensive".
In Factorio it would be like modifying the whole system to focus on producing a specific science pack quickly. The UAE would be very kind indeed... Also probably a bluff.
I would not. I would use Linux and its ecosystem as a base, use a very hackable window manager that lets me customize the window management and overall experience to my liking, and write a collection of scripts and programs that allow me to do things faster. Maybe I'd release a distro with all of this together, ready to use.