
How to create my own LLM?

Step 1: get a billion dollars.

That’s your main trade secret.




What is inherent about AIs that requires spending a billion dollars?

Humans learn a lot of things from very little input. Seems to me there's no reason, in principle, that AIs could not do the same. We just haven't figured out how to build them yet.

What we have right now, with LLMs, is a very crude brute-force method. That suggests to me that we really don't understand how cognition works, and much of this brute computation is actually unnecessary.


Maybe not $1 billion, but you'd want quite a few million.

According to [1], training a 70B model takes about $1.7 million of GPU time.
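
For a rough sanity check on that figure, here's a back-of-envelope sketch. Every constant in it is my own assumption (the standard 6ND training-FLOPs rule of thumb, Chinchilla-style 20 tokens per parameter, A100s at 40% utilization, $1.30 per GPU-hour), not a figure from the tweet:

    # Back-of-envelope training cost for a 70B-parameter model.
    # All constants below are assumptions for illustration.
    params = 70e9                # model parameters (N)
    tokens = 20 * params         # training tokens (D), ~1.4T
    flops = 6 * params * tokens  # 6*N*D rule of thumb, ~5.9e23 FLOPs

    peak = 312e12                # A100 BF16 peak throughput, FLOP/s
    utilization = 0.40           # assumed fraction of peak actually achieved
    gpu_hours = flops / (peak * utilization) / 3600  # ~1.3M GPU-hours

    print(f"${gpu_hours * 1.30:,.0f}")  # at $1.30/GPU-hour -> ~$1.7M

Which lands in the same ballpark, for whatever a napkin estimate is worth.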

And when you spend that - you don't know if your model will be a damp squib like Bard's original release. Or if you've scraped the wrong stuff from the internet, and you'll get shitty results because you didn't train on a million pirated ebooks. Or if your competitors have a multimodal model, and you really ought to be training on images too.

So you'd want to be ready to spend $1.7 million more than once.

You'll also probably want $$$$ to pay a bunch of humans to choose between responses for human feedback to fine-tune the results. And you can't use the cheapest workers for that if you need great English-language skills and want them to evaluate long responses.
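
For concreteness, that feedback is usually collected as pairwise preferences, something like the record below (a hypothetical sketch; the field names are illustrative, not from any particular pipeline):

    # A hypothetical pairwise-preference record for RLHF-style fine-tuning:
    # a human labeler reads both completions and marks the better one.
    preference = {
        "prompt": "Summarize the causes of the 2008 financial crisis.",
        "completion_a": "...",  # often hundreds of words each
        "completion_b": "...",
        "chosen": "a",          # the human judgment you're paying for
    }

Every record means a careful read of two long responses, which is why that labor isn't cheap.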

And if you become successful, maybe you'll also want $$$$ for lawyers after you trained on all those pirated ebooks.

And of course you'll need employees - the kind of employees who are very much in demand right now.

You might not need billions, but $10M would be a shoestring budget.

[1] https://twitter.com/moinnadeem/status/1681371166999707648


> And when you spend that - you don't know if your model will be a damp squib like Bard's original release. Or if you've scraped the wrong stuff from the internet, and you'll get shitty results because you didn't train on a million pirated ebooks.

This just screams to me that we don’t have a clue what we’re doing. We know how to build various model architectures and train them, but if we can’t even roughly predict how they’ll perform then that really says a lot about our lack of understanding.

Most of the people replying to my original comment seem to have dropped the “in principle” qualifier when interpreting my remarks. That’s quite frustrating because it changes the whole meaning of my comment. I think the answer is that there isn’t anything in principle stopping us from cheaply training powerful AIs. We just don’t know how to do it at this point.


> Humans learn a lot of things from very little input

And they also take 8 hours of sleep per day, and are mostly worthless for the first 18 years. Oh, and they may tell you to fuck off while they go on a 3000-mile nature walk for 2 years because they like the idea of free love better.

Knowing how birds fly really doesn't make a useful aircraft that can carry 50 tons of supplies, or one that can go faster than the speed of sound.

This is the power of machines and bacteria: throwing massive numbers at the problem. Being able to solve cognition by throwing 1 GW of power at it will absolutely solve the mystery of how our brain does it with 20 watts faster than studying the brain alone would.


If we knew how to build humans for cheap, then it wouldn't require spending a billion dollars. Your reasoning is circular.

It's precisely because we don't know how to build these LLMs cheaply that one must spend so much money to build them.


The point is that it's not inherently necessary to spend a billion dollars. We just haven't figured it out yet, and it's not due to trade secrets.

Transistors used to cost a billion times more than they do now [1]. Do you have any reason to suspect AIs to be different?

[1] https://spectrum.ieee.org/how-much-did-early-transistors-cos...


> Transistors used to cost a billion times more than they do now

However, you would still need billions of dollars if you want state-of-the-art chips today, say at 3 nm.

Similarly, LLMs may at some point not require a billion dollars; you may be able to get one on par with or surpassing GPT-4 cheaply. But state-of-the-art AI will still require substantial investment.


Because that billion dollars gets you the R&D to know how to do it?

The original point was that an “AI” might become so advanced that it would be able to describe how to create a brain on a chip. This is flawed for two main reasons.

1. The models we have today aren’t able to do this. We are able to model existing patterns fairly well but making new discoveries is still out of reach.

2. Any company capable of creating a model which had singularity-like properties would discover them first, simply by virtue of the fact that they have first access. Then they would use their superior resources to write the algorithm and train the next-gen model before you even procured your first H100.


I agree about training time, but bear in mind LLMs like GPT-4 and Mistral also have noisy recall of vastly more written knowledge than any human can read in a lifetime, and this is one of the features people like about LLMs.

You can't replace those types of LLM with a human, the same way you can't replace Google Search (or GitHub Search) with a human.

Acquiring and preparing that data may end up being the most expensive part.



