TheRoque's comments

Why isn't there a comparison with Llama 3 8B in the benchmarks?

The Llama 3 license says:

"If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights."

IANAL but my read of this is that Apple's not allowed to use Llama 3 at all, for any purposes, including comparisons.


I think it’s fair to leave it out of the on-device model comparison. A 3B model is much smaller than an 8B one; it is obviously not going to be as good as Llama 3 unless they made groundbreaking advancements with the technology.

I believe it is because Llama 3 8B beats it, which would make it look bad. The Phi-3-mini version they used is the 4k-context one, which is 3.8B, while Llama 3 8B would be more comparable to Phi-3-small (7B), which is also considerably better than Phi-3-mini. Likely both Phi-3-small and Llama 3 8B scored too well relative to Apple's model to be included, since they did add other 7B models for comparison, but only when they won.

llama 3 definitely beats it, but 99% of users won't care, which is actually a good thing... apple totally wins the ai market not by being sota but by the sheer number of devices that will be running their models, we're talking billions

Maybe it’s too new for them to have had time to include it in their studies?

Phi-3-mini, which is in the benchmarks, was released after Llama 3 8B

Llama 3 8B is really, really good. Maybe it makes Apple's models look bad? Or it could be a licensing thing where Apple can't use Llama 3 at all, even just for benchmarking and comparison.

The license for the Llama models was basically designed to stop Apple, Microsoft and Google from using it.


In their defense, I find SQLAlchemy syntax quite horrible, and I always have to look up everything. It also got a 2.0 release recently which changed some of the syntax (good luck guessing which version ChatGPT will use), which makes the process even more annoying.

SQLAlchemy syntax is ridiculously obvious and straightforward as long as you're not doing anything weird.

The takeaway here is that they weren't mature enough to realize they were, in fact, doing something "weird", i.e. using UUIDs for PKs, because hey, "Netflix does it so we have to too! Oh, and we need an engineering blog to advertise it."

Edit: more clarity on why the UUID is my point of blame: if they had used a surrogate, sequential integer PK for their tables, they would never have had to tell SQLAlchemy what the default was; it's implied, because it's the standard, non-weird behavior that doesn't include a footgun.


Unfortunately, UUID as PK is an extremely common pattern these days, because devs love to believe that they’ll need a distributed DB, and also that you can’t possibly use integers with such a setup. The former is rarely true, the latter is blatantly false.

I have been trying to be an autodidact for the last 5 years and I seriously failed. My technical skills are stagnating. My social skills deteriorated. My network is abysmal. The value of working in a team with others, and being able to talk to experts, is really invaluable when it comes to learning. Some stuff can't be found in books or on the internet, and you can't learn it by yourself unless you spend 10 years on the topic.

So, for me at least, I think being an autodidact is not the best option when it comes to learning.

I think some people are really creative and curious, and benefit from having more freedom, but most people need guidance. We shouldn't push the message that anyone can learn anything by themselves, even though I wish it were true.


There's a whole subreddit dedicated to this, r/StallmanWasRight

Ha, ha, but seriously: Stallman has been right from the very beginning, and people have forgotten what the movement he started has delivered to the world in terms of egalitarian access to computers and software. Much of the developing world would have been shut out of this revolution if it weren't for his uncompromising and forceful viewpoints.

The worst is having random people questioning your expertise because of what ChatGPT told them.

To be fair, people did this before ChatGPT. It's just the thing they point to as evidence now, and they'll always find something. The underlying problem is much bigger:

1) people confidently arguing with domain experts about topics that they have little to no experience in.

2) people valuing the opinions of arguers from 1 over experts.


To be extra fair, "domain experts" in some areas have had a bad few years; there are a couple of fields I can think of off the top of my head where the "experts" wheeled out to advise/scare the public are clearly more influenced by politics (or saving their own skin) than science. Replacing trust in experts with trust in LLMs is obviously dumb, but who is Joe Sixpack supposed to turn to?

> there are a couple of fields I can think of off the top of my head where the "experts" wheeled out to advise/scare the public are clearly more influenced by politics (or saving their own skin) than science

This feels like a thinly veiled jab at COVID era public health recommendations. Can you be more clear about which fields you’re referring to?


"domain experts" are often totally wrong and there is nothing new about this.

When our state of knowledge of the world changes, "domain experts" have the most to lose, and our state of knowledge of the world is constantly changing.

Most domains also don't have the exactness of a programming language so are exposed to the same human processes as displayed in a middle school popularity contest.

The whole concept of the "domain expert" is really a modern superstition. An especially powerful superstition because it is the superstition of those who believe themselves beyond superstition.


I'm not sure which domains you're referring to.

I can think of domains where sensationalist opinions are lifted, but not ones where the general consensus is blatantly false. I can think of plenty of instances where large news organizations have grossly misrepresented conclusions of research.

> but who is Joe Sixpack supposed to turn to?

This, I agree with. It is why I actively voice dissent, as an expert and in areas where I have domain expertise, against so-called science communicators (not all are "so-called") and when the news gets it wrong.

Hell, I'll do this when actual science communicators get it wrong. Like when Neil deGrasse Tyson is being dumb[0]. He also thinks hydrogen bombs don't have fallout...[1]. They do...

That said, I still don't think this is a reason to distrust scientists. But I think it is important for scientists to speak out when communicators get it wrong. I think this is a common problem and allows the conmen to gain power. But that's not the only force at play. Truth is complex. Approximate truth is bounded in complexity. But lies can be infinitely simple. So we get it wrong when we "reason our way through" something, because typically the base assumptions are wrong. This makes many conmen truly believe the lies that they are selling.

Joe Sixpack can reason through that. But Joe Sixpack can also reason through the concept that if he was easily able to reason through something and that experts disagree, it's pretty likely there's a reason why other than them being dumb and <Joe Sixpack> knowing better. Can, but doesn't. And we as the public let that happen. This may seem like an insurmountable problem, but instead it is a problem which just needs sufficient effort. Momentum builds, so the more people that push against this, the more common it'll become. And to be clear, it is perfectly fine to question experts. It is not perfectly fine to confidently disagree while not actually understanding the topic. If you don't know the difference, read a few papers/works in the topic and see if you can understand 90+% of it (if it is CS or Engineering, see if you can replicate).

[0] https://www.youtube.com/shorts/a-PHXGmexxM

[1] https://www.youtube.com/watch?v=QGa4ItIOCRg


Doctors had this moment when Google first came out

To be fair, I came across doctors who are no better than a static webpage from the CDC. I fire those doctors pretty quickly.

Seems like a weird take. To me it's just another abstraction level: it could be useful for a quick PoC or hobby website, and unusable for a backend that needs larger features. Either way, where do you draw the line between useful abstraction and "de-skilling"? Like, should I craft all my Docker containers by hand, and is the quick config file "de-skilling" me?

> However, their no-code approach generates awful code difficult to follow, making apps less reliable. Moreover, using a UI is slower than coding, especially now that AI assistants are here to help you.

I don't get this one, the sentence "their no-code approach generates awful code". If it's no-code, why do you care about the code?

Also, in what way are you relying on "coding"? There's no information on this on the front page; to me it seems like just a config file. Are you saying that the config only generates boilerplate that the user will modify afterwards? If that's the case, it's really not obvious.


Hi there! Dev here, thanks for the feedback. Yes, the example on the homepage (https://manifest.build/) generates the DB, admin panel and REST API; we will adapt it if that is not clear.

You asked an interesting question: "should you care about code quality if you use no-code tools?" As a developer, I would definitely say yes, because not understanding your own code (or your team's code) will sooner or later lead to issues. If you work on a team with PR validations or similar, how can you review your teammates' code if the code is unreadable?


(Supabase team member)

Congrats on shipping, this looks nice and well thought-out

One possible correction: the only code we generate for a user is either SQL migrations or TS types (if devs want to use the TS client). I’m not sure many would classify Supabase as NoCode, and we strongly recommend users use CI/CD development with our CLI and database migrations

https://supabase.com/docs/guides/cli/local-development#datab...


I'm really curious about AI and the terminal. Do you have good tools to recommend ?

Sadly, yeah. I managed to install it on both my mom's and my dad's computers. I didn't push them; they were genuinely curious and sick of having to remember their Microsoft account to do some stuff on Windows. They have been running Linux Mint for 2 years and are quite happy with it. But I hope they don't constrain themselves because of this; I don't watch their everyday use, but I know that Linux is not always easy for non-tech-savvy people.

First it's gonna be an obvious setting, then it's gonna be a setting buried deep in the "features" of Windows, then it's gonna be a registry key you have to edit, then it's gonna be an obscure PowerShell script you have to execute, and in the end, it will be a default feature that you have no control over :)

Don't forget them requiring an online Microsoft account to change the setting. Don't have internet or can't set up an account? Well, tough luck.
