The future of AI is specialization, not just achieving benevolent knowledge as fast as we can at the expense of everything and everyone along the way. I appreciate and applaud this approach. I am looking into a similar product myself. Good stuff.
Ironically, that was also the past of AI. In 2016 it was all about specialized models (not just the training data, but everything, including the architecture and model class/type) for specific tasks, and that's the way things had been for a long time.
Are you suggesting that it's an aberration that from ~2019 to ~2026 the AI field has been working on general intelligence (I assume this is what you mean by "achieving benevolent knowledge")?
Personally I think it's remarkable how much a simple transformer model can do when scaled up in size. LLMs are an incredible feat of generalization. I don't see why the trajectory should change back towards specialization now.
To be more specific, I think the future is local and specialized. IBM, among others, thought the same way with their giant centralized mainframes, which were the original way people used software in the 70s. It's an interesting parallel to today's cloud if you think about it. It's just not scalable from a resource (hardware), energy, and cost perspective. I think we're living in a unique time, but it's going to change. Without continued massive funding and a pivot to sustainability, things will (and should) change.
Don't get me wrong, general intelligence will always be important and should be a part of specialist models to a degree for understanding, but it doesn't make sense to use an 800B+ parameter model to help write an email or do research on company trends. Hell, look at what China has been able to do. Qwen 3.5 9B exceeds Claude 3.5 Haiku and nears Sonnet 3.5 levels. The 27B variant of Qwen 3.5 is superior to both in many ways and even rivals newer models. There is obviously an inherent lag behind, but we will gradually see a shift as these models become more capable.
Right now we are chasing 1-2% improvements at the cost of billions. Local models are already absurdly capable (more so by the day, same with cloud models of course) and smarter than most people in specific areas. Can we honestly say most jobs require a PhD or higher level of understanding to perform? We're chasing something that is less and less necessary from a general day-to-day perspective. AGI is outstanding, but not practical (at least today). I think we'll get there anyway at our current trajectory (though it's dangerous), but I suspect things will shift.
It's definitely happening. A good problem for them in a way if it's due to adoption, but you'd think that with all the money they pull, and with how much experience they've had so far keeping things relatively stable, it would be better than it is.
I ran 5.4 Pro on some data analytics (admittedly it was 300+ pages). It took forever. Ran the same on Sonnet 4.6, night and day difference. I understand it's like using a V8 engine for a V4 task, but I was curious. These new models look promising though. I'd rather use something like a Haiku most of the time over the best rated. I'm not a rocket scientist or solving the mysteries of the universe. They seem to do a great job 80% of the time.
I honestly think this has helped more than hurt Anthropic from a PR and marketing perspective. Sure, it was a huge contract no doubt, and obviously the government knows how good Claude is, but I applaud them for sticking to their guns. Despite Anthropic making some questionable choices (for example, even though they announced it, saying last year that they would start training on user data unless you explicitly opt out was a bit out of left field for them, among other things), it must have been some crazy stuff they were being asked to do.
It's all a bit hyped up for the media though. It's like saying a rapist is good but a murderer is bad. Both are bad; you can argue either way, but ultimately OpenAI, Anthropic, and likely Google will all enable/disable whatever systems are needed to allow killing humans if it means they get a big check from the US Gov.
Anthropic has let its system kill humans, although it happened in a roundabout way, which in my opinion doesn't absolve them of responsibility.
"A computer can never be held accountable, therefore a computer must never make a management decision".
The fact we are drifting away from this every day scares me.
I understand what you're saying. It's incredible technology and I use it every day in my work, but we are too busy racing to one-up each other while ignoring the critical safety components, which is extremely dangerous and irresponsible. Even Anthropic, to your point, was supposed to be the safer AI company when they started out, but they continue to move away from that path slowly. The problem is the cat's out of the bag, and people won't stop now unless something terrible happens. The question is, how bad of an event will it be?
It's an interesting problem: even though you're describing it as a single person, I think it's shared across the board with larger corporations at scale. I know, for example, they were seeing this with game devs around the Godot engine. So many people were uploading unverified AI-generated work that maintainers just couldn't keep up with it. And maybe some of it's good, but how do you vet all the crap out? No one knows what's being written anymore (and non-devs can code now too, which is amazing, but part of the problem we introduced). I think the future of being a developer will be more about verifying code integrity and working with AI to ensure it meets said standards, rather than actually being in the driver's seat. Not sexy, but we're handing the keys over willingly, yet AI is only interpreting our intent. It's going to get things wrong no matter what we do.
This is how Skynet really starts.. :) Very strange to have a company for AI to discuss things openly. Would never have imagined something like this. But here we are!
This makes me think of something I see in the game dev space. People use AI to code and are OK with that, but are strictly against the art or music being AI-generated (more and more, AI lets us sit in the director's chair). People thought the same thing about Photoshop back in the day. Or how about auto-keyframing in 3D animation instead of doing everything by hand? Or even classic cel drawing being replaced by 3D; I remember when everyone viewed 3D animation as a cheat and not a real form of art.
Change can be very difficult, but it's here to stay whether we like it or not (with all the good and the bad that comes with it). One way or the other, AI is a tool; it's unlike anything we thought possible, but still a tool nonetheless. Sure, you can tell it to churn out generic garbage all over the place, but I would argue that a human guiding the AI, working together, can produce something truly spectacular. This isn't to take away from how things were done before. We can and should respect the past and learn from it, but we always need to keep moving forward.