
Summary: it's cheaper, safer for handling sensitive data, and easier to reproduce results (in fact, the only way to be 100% sure a result is reproducible, since "external" models can change at any time); you also get a higher degree of customization, no internet connectivity requirement, and more efficiency and flexibility.
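
A minimal sketch of what "reproducible" means in practice, assuming llama-cpp-python is installed and ./model.gguf is a hypothetical local model file you've pinned (e.g. by checksum):

    # Reproducible local inference: the weights are a file you control,
    # so nothing can change underneath you between runs.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./model.gguf",  # hypothetical path; pin the exact file
        seed=42,                    # fixed RNG seed
        n_ctx=2048,
    )

    out = llm.create_completion(
        "Explain why local inference is reproducible.",
        max_tokens=64,
        temperature=0.0,  # greedy decoding: same weights + same build + same prompt -> same output
    )
    print(out["choices"][0]["text"])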



No ridiculous prohibitions on training on logs…

Man, imagine being OpenAI and flushing your brand down the toilet with an explicit customer noncompete rule that totally backfires and inspires 100x more competition than it prevents.


Llama's license does forbid it:

"Llama 3.1 materials or outputs cannot be used to improve or train any other large language models outside of the Llama family."

https://llamaimodel.com/commercial-use/


I'm not sure why anybody would respect that licence term, given the whole field rests on the rapacious misappropriation of other people's intellectual property.


Meta dropped that term, actually, and that's an unofficial website.


It's still present in the llama license...?

https://ai.meta.com/llama/license/

Section 1.b.iv


Llama 3.1 isn't under that license, it's under the Llama 3.1 Community License Agreement: https://www.llama.com/llama3_1/license/


>If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.

The official Llama 3 repo still says this; the phrasing is different, but it's effectively equivalent in meaning to what the commenter above quoted.


An AI chip in laptops would be amazing!


It's pretty much happening already. Apple devices have the Neural Engine, and their GPUs are exposed through MPS. Both newer Intel chips and the Snapdragon X have some form of NPU.
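
For what it's worth, here's a quick sketch of probing for those accelerators from PyTorch (MPS for the Apple GPU, CUDA for Nvidia); NPUs generally need vendor runtimes such as OpenVINO or Qualcomm's QNN instead:

    # Pick the best locally available accelerator that PyTorch can see.
    import torch

    if torch.backends.mps.is_available():
        device = torch.device("mps")   # Apple Silicon GPU via Metal
    elif torch.cuda.is_available():
        device = torch.device("cuda")  # Nvidia laptop/desktop GPU
    else:
        device = torch.device("cpu")   # fallback; NPUs need vendor SDKs

    print(f"running on: {device}")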


It would be great if any NPU that currently exists were any good at LLM acceleration, but they all have severe memory-bandwidth bottlenecks.
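
To put a rough number on that (illustrative figures, not measurements of any specific NPU): at decode time, each generated token has to stream essentially all of the weights from memory, so bandwidth sets a hard ceiling on tokens/sec:

    # Back-of-envelope bandwidth ceiling on decoding speed.
    # Both numbers below are assumptions for illustration, not benchmarks.
    weights_gb = 4.5        # e.g. an 8B model at ~4-bit quantization
    bandwidth_gb_s = 60.0   # assumed sustained memory bandwidth

    # Each token requires reading ~all weights once:
    print(f"~{bandwidth_gb_s / weights_gb:.0f} tokens/s upper bound")  # ~13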


They already exist: Nvidia GPUs in laptops, Apple's M-series chips, NPUs...


oh damn guess i am so uninformed


The first NPU arrived seven years ago in an iPhone SoC. GPUs are also “AI” chips.

The local LLM community has been using Apple Silicon Mac GPUs for inference.
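
For the curious, a sketch of that setup, assuming a Metal-enabled llama-cpp-python build and a hypothetical local ./model.gguf file:

    # Offload every layer to the Mac GPU; n_gpu_layers=-1 means "all layers".
    from llama_cpp import Llama

    llm = Llama(model_path="./model.gguf", n_gpu_layers=-1)  # path is hypothetical
    print(llm.create_completion("Hello", max_tokens=16)["choices"][0]["text"])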

I’m sure Apple Intelligence uses the NPU and maybe the GPU sometimes.



