
Since when are military spooks and political opportunists better at deciding on our technological future than startups and corporations? The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling. How would you ensure all government actors are sticking to the same safety standards rather than seizing power by implementing AI hastily? This problem has long been known as quis custodiet ipsos custodes - "who guards the guards themselves?"



> The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling.

It's not that bad given the compute requirements for training even the basic LLMs we have today.
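
A back-of-envelope sketch of why (the utilisation and GPU figures below are rough assumptions, not measurements): using the common approximation of ~6 FLOPs per parameter per training token, even a 7B model trained on 2T tokens is a very visible amount of compute.

  # Rough training-compute estimate: C ~= 6 * N * D
  params = 7e9    # 7B-parameter model
  tokens = 2e12   # 2T training tokens
  flops = 6 * params * tokens          # ~8.4e22 FLOPs

  peak = 312e12   # assumed A100 BF16 peak, FLOP/s
  util = 0.4      # assumed 40% utilisation
  gpu_days = flops / (peak * util) / 86400
  print(f"{flops:.1e} FLOPs ~= {gpu_days:,.0f} A100-days")
  # -> roughly 7,800 A100-days; a cluster that size leaves a paper trail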

But yes, it's a long shot.


Training a SOTA model is expensive, but you only need to do it once; then you can fine-tune it a thousand times for different purposes.
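
Parameter-efficient methods like LoRA are what make that split concrete: one expensive base model, many tiny adapters. A minimal sketch with the Hugging Face peft library (the model name and hyperparameters here are illustrative assumptions):

  # Attach a small LoRA adapter to a frozen base model.
  # Only the adapter trains, so each fine-tune costs a tiny
  # fraction of the original pre-training run.
  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
  config = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
  model = get_peft_model(base, config)
  model.print_trainable_parameters()  # typically well under 1% trainable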

And it's not even that expensive compared to the cost of other large-scale projects - how much does a dam or a subway station cost? There are also corporations that would profit from making models widely available, such as chip makers: they would be commoditising the complement.

Once you have a very capable, open-sourced model that runs locally on phones and laptops, fine-tuning is almost free.

This is not make-believe. A few recent fine-tunes of Mistral-7B, for example, are excellent quality and run surprisingly fast on a 5-year-old GPU - about 40 tokens/s. I foresee a new era of grassroots empowerment and privacy.
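
For what it's worth, running such a fine-tune locally is only a few lines these days, e.g. with the llama-cpp-python bindings and a quantised GGUF file (the file name and settings below are assumptions, not a benchmark):

  # Local inference with a quantised Mistral-7B fine-tune via llama.cpp.
  # A 4-bit quant fits in roughly 4-5 GB, which is why older GPUs cope.
  from llama_cpp import Llama

  llm = Llama(
      model_path="./mistral-7b-finetune.Q4_K_M.gguf",  # hypothetical file
      n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
      n_ctx=4096,
  )
  out = llm("Summarise the case for local LLMs:", max_tokens=128)
  print(out["choices"][0]["text"])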

In a few years we will have more powerful phones and laptops with specialised LLM chips, better pre-trained models, and better fine-tuning datasets distilled from the SOTA models of the day. We might have good-enough AI on our own terms.


> Once you have a very capable, open-sourced model that runs locally on phones and laptops, fine-tuning is almost free.

Hence the idea to ban development of more capable models.

(We're really pretty lucky that LLM-based AGI might be the first type of AGI built; it seems much lower-risk and lower-power than some of the other possibilities.)



