Was that Flightfox? If so, I loved using it, helped me save so much money but also time :)
It sounds like there’s a problem with having too many flights that are barely full and hence unprofitable. AFAIK the federal gov spends significant money subsidising many “small airport” routes even if they’re barely used.
That’s just the nature of the beast. Airlines have to align large capital intensive assets with fluctuating passenger demand and fuel prices. And at congested airports the slots are also expensive assets that get auctioned off, and operate on a use it or lose it basis.
Spirit and the other LCCs' problem is that the legacy airlines now offer a similar product in their basic economy fares that has less hassle, higher frequency, and is sometimes even eligible for earning in their massive loyalty programs.
> The 256 GB/s number is real, but for context, an Apple M5 Ultra hits ~800 GB/s on its unified memory
The M5 Ultra has not even been announced.
This article appears to be predominantly or entirely LLM-produced with little to no human review, and it contains numerous material, misleading errors.
It also omits serious contenders that are worth at least comparing against, like the DGX Spark.
Eh, you're seeing raw thinking tokens. With Claude <x> 4, and I think the GPT-5 series, you are no longer seeing the real thinking tokens, but "summarized" tokens that are probably quite different from the raw thinking.
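For what it's worth, you can see this directly in Anthropic's API: extended thinking comes back as distinct content blocks you can iterate over. A minimal TypeScript sketch against the official @anthropic-ai/sdk (the model id is a placeholder):

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const msg = await client.messages.create({
  model: "claude-sonnet-4-20250514", // placeholder model id
  max_tokens: 2048,
  thinking: { type: "enabled", budget_tokens: 1024 },
  messages: [{ role: "user", content: "How many primes are there below 100?" }],
});

// For Claude 4 models these blocks contain *summarized* thinking,
// not the raw chain of thought the model actually produced.
for (const block of msg.content) {
  if (block.type === "thinking") console.log(block.thinking);
}
```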
In the horror days of IE, I remember having to look up some DirectX filter to properly display PNG images with transparency. It was that bad, and that's one example out of a thousand.
Some libraries/scripts helped normalise things a little, but never enough. Yuck.
Being a web developer was not fun, and the web was absolutely being held back. Chrome did a lot of things right: per-origin sandboxing, properly implementing web standards, V8, developer tools; and back then Chromium was super close to Chrome.
Do I think Chrome has been a net negative for the web over the past ~3-5 years? Yes, especially with Manifest V3, the "Privacy Sandbox", and them basically forcing through web APIs because they have the dominant market share.
But early Chrome was a technologically impressive and user-friendly browser that really did make the web massively better.
I remember happily putting Firefox and Chrome mini-banners (what are they called? those little rectangular images) on my website, for free, because I recommended them.
In practice it's useful too. The local translation in Firefox is quite good, and I love that I can translate pages entirely on my machine, without the contents going to another server.
As for the Apple foundation models, I think the issue is more that they're just not very intelligent or capable. Maybe WWDC will change that, but if you want to implement LLM functionality, you're better off either calling an API or shipping a better small on-device model.
Yeah I looked into the Apple Foundation models and was surprised at their limited scope. On reflection it made sense though. They’re giving you the small part of the LLM capability surface that (1) can run with good performance on all their hardware and (2) works reliably.
It’s not enough for a chat-first research agent, but it’s definitely enough to unlock features that rely on natural language understanding. Seems like a small thing compared to Claude/ChatGPT and the general hype, but still magic in its own context.
Right and that means people have to send their data to an external service.
Give it X months (or years??) and people will realize this is actually a privacy/data autonomy issue.
It's just dominated right now by the anti-AI/anti-technology sentiment in the west. That will gradually go away as more people use AI and robotics and realize how wrong they were about it.
>Right and that means people have to send their data to an external service.
Nothing in this proposal claims it has to be a local AI. That just happens to be the implementation by Chrome and Edge (for now at least, I'd imagine Google will eventually start moving this API towards hosted Gemini).
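For reference, this is roughly the shape of the proposal per the webmachinelearning/prompt-api explainer; the global name and availability values have shifted between drafts, so take this as a sketch rather than the final surface. Note that nothing in it pins down where the model actually runs:

```ts
// Experimental global exposed by Chrome's built-in AI trials; shape is
// approximate. The spec deliberately doesn't say where the model runs:
// a browser could back this with an on-device model or a hosted one.
declare const LanguageModel: {
  availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
  create(): Promise<{ prompt(input: string): Promise<string> }>;
};

if ((await LanguageModel.availability()) !== "unavailable") {
  const session = await LanguageModel.create();
  console.log(await session.prompt("Is this review positive or negative? ..."));
}
```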
That's an important aspect of this that should really be part of the discussion on GitHub. But I've been told I'm not qualified to interject so I am not going to bother.
I will use WebLLM if I want something like this (with local AI guaranteed).
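Something like this, going by WebLLM's OpenAI-compatible API (the model id is one of their prebuilt entries and may change):

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Weights are downloaded once and cached; inference then runs in-browser
// on WebGPU, so nothing leaves the machine after the initial model fetch.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize this page in one sentence." }],
});
console.log(reply.choices[0]?.message.content);
```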
So I guess the question would be: what makes this acceptable tech? I don't know how you get there without offering some type of default-search-style choice screen for open models. We all know how that turned out.
Maybe Mozilla can save itself by getting paid to serve Google's model as the default rather than another provider's. That would replace the revenue stream they lost.
I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.
For some personal projects I still stick to the basics and write everything by hand though. It's kinda nice and grounding, and almost feels like a detox.
For any new software engineer, I’m a strong advocate of zero LLM use (except maybe as a stack overflow alternative) for your first few months.
It's significantly slower to use LLMs for some things. The only thing it excels at is generic, broad tasks. Getting the 90% done. I find that it's less cumbersome to get it mostly right and touch it up yourself than to prompt over details like syntax.
Besides what the other person mentioned about image generation being more useful for enterprise, I also heard on a podcast that gpt-image-2 uses the same general architecture as the LLM models, while Sora was a very different architecture. By shutting down Sora they don't need two different sets of everything.
Image generation would seem to have many more enterprise use cases (particularly marketing, but essentially all the business uses of Photoshop) than video.