Ask HN: AI innovation outside big tech is over. Debate me
4 points by carlosfm 66 days ago | hide | past | favorite | 7 comments
Alright, I know, the title is on the "click bait" side. But hear me out.

Researchers in the UK rely on High Performance Computing (HPC) systems to do their work. If you combine the top 8 HPCs in the country, they amount to 1,000 GPUs altogether. For context, Meta alone used 24,000 GPUs just to train one of its models (LLaMA). To make things worse, those HPCs are getting to the end of their life without plans for renewal.

With the growing size of AI models (both in vision and language) and the slow pace of hardware improvement, this is only going to get worse. Meta, xAI and OpenAI know this, and they are hoarding resources (hundreds of thousands of GPUs).

How are researchers, independent developers and AI/ML hobbyists going to compete in the space? Do we just submit to our techno-feudal lords and pay rent to access crumbs? Is it over?

Does anyone else care about this?

I'd love to hear people's views on this. Disclaimer: I am working on a solution and I am reaching out to the community to learn more. Check it out if you are interested: https://github.com/kalavai-net/kalavai-client




I'm relatively optimistic, on the grounds that:

1. Diminishing returns on a model's capability with respect to scale mean that, even if big tech datacenters grow at a faster rate than the infrastructure available to individual researchers, the gap in performance between their models will narrow, with big tech needing to invest ever-increasing resources just to hold a shrinking lead

2. Many applications of AI don't need the latest massive LLMs. A defect detector may reach 99.9% accuracy, at which point further work gives negligible improvement, and the hardware cost to reach that point is steadily decreasing - putting it in range of more individuals, small companies, etc.


If by "AI" you mean genAI, then most of the energy around it is inside big tech. However, if you mean "AI" in general, I don't think that's really the case.


I tend to agree, with two caveats:

- leaving genAI to just a few is arguably bad enough, given that it is demonstrably the biggest leap in performance around perception

- it's even worse when we consider that genAI has become a funding sink within the field, sucking funding away from almost anything else


Totally agree with you.

But do we really need AI?


The bar for things happening in the world isn't "do we really need this"; it never has been. It's more like "does anyone have an incentive to do this", and with AI plenty of companies and people do.


I think, as with any technology, it's not so much "do we need it" but "who has access to it". Unless you consider genAI totally useless (which is a stretch), effective access is an impactful consideration, society-wise.


> Do we just submit to our techno-feudal lords and pay rent to access crumbs? Is it over?

It's too important to give up.

But yeah, the capital inequality in every important field - not just AI - is far more of a problem than the vast majority of people grasp. And the trend isn't looking great either.



