At least they are honest about it in the specs that they have published - there's a graph there that clearly shows their server-side model underperforming GPT-4. A refreshing change from the usual "we trained a 7B model and it's almost as good as GPT-4 in tests" hype train.
Yeah, their models are more targeted. You can't ask Apple Intelligence/Siri about random celebrities or cocktail recipes.
But you CAN ask it to show you all pictures you took of your kids during your vacation to Cabo in 2023 and it'll find them for you.
The model "underperforms", but not in the ways that matter. This is why they partnered with OpenAI, to get the generic stuff included when people need it.
Yeah, but Apple wouldn’t care either way. They do things for the principle of it. “We have an ongoing beef with NVIDIA, so we’ll build our own AI server farms.”
Apple has a long antagonistic relationship with NVIDIA. If anything it is holding Apple back, because they don’t want to go cap in hand to NVIDIA and say “please sir, can I have some more”.
We see this play out with the ChatGPT integration. Rather than Apple hosting GPT-4o themselves, OpenAI does. Apple is providing NVIDIA-powered AI models through a third party, somewhat undermining the privacy-first argument.
Not really. They use ChatGPT as a last resort, for questions that aren't related to the device or to an Apple-related interaction. Ex: "Make a recipe out of the foods in this image" versus "how far away is my mom from the lunch spot she told me about". And in that instance they ask the user explicitly whether they want to use ChatGPT.