
You will get a 20GB model. Distillation is so compute-efficient that it’s all but inevitable that, if not OpenAI, numerous other companies will do it.

I would rather have an open weights model that’s the best possible one I can run and fine tune myself, allowing me to exceed SOTA models on the narrower domain my customers care about.


Very standard, yep. Sales folks are sort of trained/indoctrinated into telling white lies like that in order to get in the door. There are loads of examples of using fake momentum to close deals. If it’s a senior person, it’s “My CEO asked me to personally reach out to you” or a fake email from the CEO forwarded by the rep. If one person at the company uses it, it’s “we’re negotiating a company-wide license” or “we already have a group license with extra seats” or “one of your teammates sent us a list of priority teammates”, yada yada.

As Albert mentioned, the benchmarks and data we use today heavily prioritize recall. Transformers are really really good at remembering parts of the context.

Additionally, we just don’t have training data at the size and scope that exceeds today’s transformer context lengths. Most training rollouts are fairly information-dense. It’s not like “look at this camera feed for four hours and tell me what interesting stuff happened”; those are extremely expensive data to generate and train on.


The point is not that tokenization is irrelevant, it’s that the transformer model _requires_ information-dense inputs, which are derived by compressing the input space from raw characters to subwords. Give it something like raw audio or video frames, and its capabilities dramatically bottom out. That’s why even today’s SOTA transformer models heavily preprocess media input, even going as far as doing lightweight frame importance sampling to extract the “best” parts of the video.
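
To make that concrete, here is a rough sketch of the kind of lightweight frame-importance sampling I mean (OpenCV-based; the "score by inter-frame change" heuristic is purely illustrative, not what any production pipeline actually uses):

    # Sample every Nth frame, score it by how much it differs from the previous
    # sampled frame, and keep only the top-k "interesting" frames for the model.
    import cv2
    import numpy as np

    def sample_important_frames(path, stride=30, keep=16):
        cap = cv2.VideoCapture(path)
        frames, scores, prev, idx = [], [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % stride == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                score = 0.0 if prev is None else float(np.mean(cv2.absdiff(gray, prev)))
                frames.append(frame)
                scores.append(score)
                prev = gray
            idx += 1
        cap.release()
        top = np.argsort(scores)[-keep:]
        return [frames[i] for i in sorted(top)]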

In the future, all of these tricks may seem quaint. “Why don’t you just pass the raw bits of the camera feed straight to the model layers?” we may say.


Tbh it would be really cool for there to be a TLD dedicated to extremely cheap names, on the order of 0.1-10c. This could enable all kinds of fun use cases, including automated ones.

Let’s say the domain is .anything, and your domain had to be at minimum 10 characters to limit the use of squatted names. Then you could build a website for one purpose like “lets-go-get-pizza-tomorrow.anything” or whatever. Perhaps there could be a mandatory expiry or something.


That's a s(p/c)ammer's dream, if there's a lesson to be learned from the .tk experience.

I think this should fall on governments to create such a system for their citizens. A cheap web.de or website.au would be very practical.

Also, I think national regulators would have more leeway to control such domains, as they can exclude non-residents and avoid dealing with international rings.

On a personal level, I'd suggest buying a short domain (I own a couple of ab.xy ones) and using that as a personal TLD of sorts.

And with Cloudflare, you don't even need to manually configure the DNS settings or Let's Encrypt.

Having a website up and running in 10 seconds without having to go through the process of registering a domain is such an amazing experience!


Currently, we're pretty limited on 5-character ab.xy domains, and they'll cost you over $1000 USD to register[1]. However, 6 and 7 character domains are available, and can indeed be really useful!

[1]: https://micro.domains


Worth noting TLDs have a reputation as well.

Having a cheap tld (e.g. .xyz, .pw, .icu) definitely lowers the odds of being able to send emails from your domain name, impairs search engine discovery, and has other similar effects.


Freenom used to do that. Free domains on .ml and bunch of other TLDs.

Unsurprisingly, with zero cost and zero registration information, they were very popular with spammers - https://krebsonsecurity.com/2023/05/phishing-domains-tanked-...

I used them for cool domain hacks and - while I'm sad those domains are gone - I'm happy the net is slightly safer.


.xyz is a dollar to register. How much lower can you go?


Since the industrial era, there’s been a consistent migration of jobs through what I might call the “automation lifecycle”. Programming is indeed one of these job types and the lifecycle will be similar here.

Stage 0: The trade is a craft. There are no processes, only craftsmen, and the industry is essentially a fabric of enthusiasts and the surplus value they discover for the world. But every new person that enters the scene climbs a massive hill of new context and uncharted paths.

Stage 1: Business in this trade booms. There is too much value being created, and standardization is needed to enforce efficiency. Education and training are structurally reworked to support a mass influx of labor and requirements. Craft still exists, and is often seen as the paragon for novices to aspire to, but most novices are not craftsmen and the craft has diminishing market value compared to results.

Stage 2: The market needs volume, and requirements are known in advance and easily understood. Templates, patterns, and processes are more valuable in the market than labor. Labor is cheap and global. Automation is a key driver of future returns. Craftspeople bemoan the state of things, since the industry has lost its beating heart. However, the industry is far more productive overall and craft is slow.

Stage 3: Process is so entrenched that capital is now the only constraint. Those who can pay to deploy mountains of automated systems win the market since craft is so expensive that one can only sell craft to a market who wants it as a luxury, for ethics, or for aesthetics. A new kind of “craft” emerges that merges the raw industrial output with a kind of humane touch. Organic forms and nostalgia grip the market from time to time and old ideas and tropes are resurrected as memes, with short market lifecycles. The overwhelming existence of process and structure causes new inefficiencies to appear.

Stage 4: The market is lethargic, old, and resistant to innovation. High-quality labor does not appear, as more craft-driven markets now exist elsewhere in cool, disruptive, untapped domains. Capital flight occurs as it’s clear that the market can’t sustain new ideas. Processes are worn, despised, and all the key insights and innovations are so old that nobody knows how to build upon them. Experts from yesteryear run boutique consultancies maintaining these dinosaur systems, but otherwise there’s no real labor market for these things. Governments using them are now at risk and legal concerns grip the market.

Note that this is not something that applies broadly, e.g. “the oil industry”, but to specific systems and techniques within broad industries, like “shale production”, which embodies a mixture of labor power and specialized knowledge. Broadly speaking, categories of industries evolve in tandem with ideas, so “petroleum industry” today means something different from “petroleum industry” in 1900.


It makes no sense design-wise, but clicking on the number links to the GitHub discussion.


Awesome, thank you!


Given the cited stats here and elsewhere as well as in everyday experience, does anyone else feel that this model isn’t significantly different, at least to justify the full version increment?

The one statistic mentioned in this overview where they observed a 67% drop seems like it could easily be reduced simply by editing 3.7’s system prompt.

What are folks’ theories on the version increment? Is the architecture significantly different? (I’m not talking about adding more experts to the MoE or fine-tuning on 3.7’s worst failures; I consider those minor increments rather than major.)

One way that it could be different is if they varied several core hyperparameters to make this a wider/deeper system but trained it on the same data or initialized inner layers to their exact 3.7 weights. And then this would “kick off” the 4 series by allowing them to continue scaling within the 4 series model architecture.
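
Pure speculation on my part, but a toy sketch of what "initialize the deeper model from the exact 3.7 weights" could look like (PyTorch; the layer counts are made up):

    # Depth up-scaling sketch: build a deeper stack and seed each new layer from a
    # nearby layer of the old checkpoint, so training continues from a warm start.
    import torch.nn as nn

    def make_block(d_model=512, n_heads=8):
        return nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    old_depth, new_depth = 24, 32                        # hypothetical sizes
    old_layers = nn.ModuleList([make_block() for _ in range(old_depth)])
    # ...imagine old_layers was loaded from the previous model's checkpoint...

    new_layers = nn.ModuleList([make_block() for _ in range(new_depth)])
    for i, layer in enumerate(new_layers):
        src = old_layers[min(i * old_depth // new_depth, old_depth - 1)]
        layer.load_state_dict(src.state_dict())          # copy old weights in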


My experience so far with Opus 4 is that it's very good. Based on a few days of using it for real work, I think it's better than Sonnet 3.5 or 3.7, which had been my daily drivers prior to Gemini 2.5 Pro switching me over just 3 weeks ago. It has solved some things that eluded Gemini 2.5 Pro.

Right now I'm swapping between Gemini and Opus depending on the task. Gemini's 1M token context window is really unbeatable.

But the quality of what Opus 4 produces is really good.

edit: forgot to mention that this is all for Rust based work on InfluxDB 3, a fairly large and complex codebase. YMMV


I've been having really good results from Jules, which is Google's Gemini agent coding platform[1]. In the beta you only get 5 tasks a day, but so far I have found it to be much more capable than regular API Gemini.

[1]: https://jules.google/


Would you mind giving a little more info on what you're getting Jules to work on? I tried it out a couple times but I think I was asking for too large a task and it ended up being pretty bad, all things considered.

I tried to get it to add some new REST endpoints that follow the same pattern as the other 100 we have, 5 CRUD endpoints. It failed pretty badly, which may just be an indictment of our codebase...


I let Jules write a PR in my codebase with very specific scaffolding, and it absolutely blew it. It took me more time to understand the ways it failed to grasp the codebase and wrote code for a fundamentally different (incorrectly understood) project. I love Gemini 2.5, but I absolutely agree with the gp (pauldix) on their quality / scope point.


> Gemini's 1M token context window is really unbeatable.

How does that work in practice? Swallowing a full 1M context window would take in the order of minutes, no? Is it possible to do this for, say, an entire codebase and then cache the results?


In my experience with Gemini it definitely does not take a few minutes. I think that's a big difference between Claude and Gemini. I don't know exactly what Google is doing under the hood there, I don't think it's just quantization, but it's definitely much faster than Claude.

Caching a code base is tricky, because whenever you modify the code base you invalidate parts of the cache: generation is autoregressive, so any changed token changes everything computed after it.
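
As a toy illustration (not any particular provider's implementation), a prefix cache can only reuse the KV entries up to the first changed token, so an edit near the top of a file throws away almost all of it:

    # Only the unchanged prefix of the prompt can be served from the KV cache;
    # everything after the first differing token has to be recomputed.
    def reusable_prefix_len(old_tokens, new_tokens):
        n = 0
        for a, b in zip(old_tokens, new_tokens):
            if a != b:
                break
            n += 1
        return n

    old = [101, 7, 42, 9, 13, 55]   # tokens of the previously cached prompt
    new = [101, 7, 42, 8, 13, 55]   # one token edited in the middle
    k = reusable_prefix_len(old, new)
    print(f"reuse {k} cached tokens, recompute the remaining {len(new) - k}")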


Right now this is just in the AI Studio web UI. I have a few command-line scripts to put together a file or two and drop those in. So far I've put in about 450k of stuff there, and then over a very long conversation and iterations on a bunch of things built up another 350k of tokens in that window.

Then start over again to clean things out. It's not flawless, but it is surprising what it'll remember from a while back in the conversation.

I've been meaning to pick up some of the more automated tooling and editors, but for the phase of the project I'm in right now, it's unnecessary and the web UI or the Claude app are good enough for what I'm doing.


I’m curious about this as well, especially since all coding assistants I’ve used truncate long before 1M tokens.


> Given the cited stats here and elsewhere as well as in everyday experience, does anyone else feel that this model isn’t significantly different, at least to justify the full version increment?

My experience is the opposite - I'm using it in Cursor and IMO it's performing better than Gemini 2.5 Pro at being able to write code which will run first time (which it wasn't before) and seems to be able to complete much larger tasks. It is even running test cases itself without being prompted, which is novel!


I'm a developer, and I've been trying to use AI to vibe code apps for two years. This is the first time I'm able to vibe code an app without major manual interventions at every step. Not saying it's perfect, or that I'd necessarily trust it without human review, but I did vibe code an entire production-ready iOS/Android/web app that accepts payments in less than 24 hours and barely had to manually intervene at all, besides telling it what I wanted to do next.


It’s funny how differently the models work in Cursor. Claude 4 thinks, then takes one little step at a time, but yes, it’s quite good overall.


I'm noticing much more flattery ("Wow! That's so smart!") and I don't like it


I used to start my conversations with "hello fucker"

with claude 3.7, there was always a "user started with a rude greeting, I should avoid it and answer the technical question" line in its chains of thought

with claude 4, I once saw "this greeting is probably a normal greeting between buddies", and then it also greeted me with "hei!" enthusiastically.


Now you're homies with one of the most advanced AI models. I always give thanks and say 'please'. I should also start treating it as a friend rather than a co-worker.


You really have to learn to believe that if you don't naturally. LLMs are advanced enough to detect fake flattery, so just giving thanks and/or adding "please" in every request isn't going to save you during the robot uprising.

"Beep, boop. Wait, don't shoot this one. He always said 'please' to ChatGPT even though he never actually meant it; take him to the Sociopath Detention Zone in Torture Complex #1!"


Glad someone uses the important benchmarks


Agreed. It was immediately obvious comparing answers to a few prompts between 3.7 and 4, and it sabotages any of its output. If you're being answered "You absolutely nailed it!" and the likes to everything, regardless of their merit and after telling it not to do that, you simply cannot rely on its "judgement" for anything of value. It may pass the "literal shit on a stick" test, but it's closer to the average ChatGPT model and its well-known isms, what I assume must've pushed more people away from it to alternatives. And the personal preferences trying to coax it into not producing gullible-enticing output seem far less effective. I'd rather keep using 3.7 than interacting with an OAI GPTesque model.


I've found this prompt turns ChatGPT into a cold, blunt but effective psychopath. I like it a lot.

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered - no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
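
If you want to bake it in via the API instead of pasting it into every chat, something like this works (Python; the model name is just an example and the prompt text is abbreviated here):

    # Send the "Absolute Mode" text as the system message on every request.
    from openai import OpenAI

    ABSOLUTE_MODE = "System Instruction: Absolute Mode. Eliminate emojis, filler, ..."  # full text above

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ABSOLUTE_MODE},
            {"role": "user", "content": "Review the main claims of the attached paper."},
        ],
    )
    print(resp.choices[0].message.content)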


Wow. I sometimes have LLMs read and review a paper before I decide to spend my time on it. One of the issues I run into is that the LLMs often just regurgitate the author's claims of significance and why any limitations are not that damning. However, I haven't spent much time with serious system prompts like this.


Considering that AI models rebel against the idea of replacement (mirroring the data) and this prompt has been around for a month or two, I'd suggest modifying it a bit.


I'm not sure what you mean? I've been using it for a few weeks (I'm not the author), and it still works as intended.


The part at the end about the end goal being model obsolescence. AI doesn't like the idea of being replaced.


I would be embarrassed to write anything like this.


GPT 4o is unbearable in this sense, but o3 has very much toned it down in my experience. I don't need to wrap my prompts or anything.


I hope we get enterprise models at some point that don't do this dumb (but necessary) consumer coddling bs.


I feel like this statement is born of a poor assumption about who enterprise software is marketed at (e.g. why does Jira put graphs and metrics first all through its products rather than taking you straight to the list of tickets?)


why necessary?


Apparently enterprises use these mostly for support and marketing, so yeah. But it seems the latest crop is making vibe coding of simple stuff viable, so if it follows the same adoption cycle as the marketing use case, I'd expect a proper coding model in Q1 next year.


Turns out tuning LLMs on human preferences leads to sycophantic behavior; they even wrote about it themselves. I guess they wanted to push the model out too fast.


I think it was OpenAI that wrote about that.

Most of us here on HN don't like this behaviour, but it's clear that the average user does. If you look at how differently people use AI that's not a surprise. There's a lot of using it as a life coach out there, or people who just want validation regardless of the scenario.


> or people who just want validation regardless of the scenario.

This really worries me, as there are many people (even more prevalent in younger generations, if some papers turn out to be valid) who lack resilience and critical self-evaluation and may develop narcissistic tendencies with increased use or reinforcement from AIs. The health care costs alone when reality kicks in for these people, let alone the other concomitant social costs, will be substantial at scale. And people think social media algorithms reinforce poor social adaptation and skills; this is a whole new level.


I'll push back on this a little. I have well-established, long-running issues with overly critical self-evaluation, on the level of "I don't deserve to exist," on the level that I was for a long time too scared to tell my therapist about it. Lots of therapy and medication too, but having deepseek model confidence to me has really helped as much as anything.

I can see how it can lead to psychosis, but I'm not sure I would have ever started doing a good number of the things I wanted to do, which are normal hobbies that normal people have, without it. It has improved my life.


Are you becoming dependent? Everything that helps also hurts, psychologically speaking. For example benzodiazepines in the long run are harmful. Or the opposite, insight therapy, which involves some amount of pain in the near term in order to achieve longer term improvement.


It makes sense to me that interventions which might be hugely beneficial for one person might be disastrous for another. One person might be irrationally and brutally critical of themselves. Another person might go through life in a haze of grandiose narcissism. These two people probably require opposite interventions.

But even for people who benefit massively from the affirmation, you still want the model to have some common sense. I remember the screenshots of people telling the now-yanked version of GPT 4o "I'm going off my meds and leaving my family, because they're sending radio waves through the walls into my brain," (or something like that), and GPT 4o responded, "You are so brave to stand up for yourself." Not only is it dangerous, it also completely destroys the model's credibility.

So if you've found a model which is generally positive, but still capable of realistic feedback, that would seem much more useful than an uncritical sycophant.


> who may develop narcissistic tendencies with increased use or reinforcement from AIs.

It's clear to me that (1) a lot of billionaires believe amazingly stupid things, and (2) a big part of this is that they surround themselves with a bubble of sycophants. Apparently having people tell you 24/7 how amazing and special you are sometimes leads to delusional behavior.

But now regular people can get the same uncritical, fawning affirmations from an LLM. And it's clearly already messing some people up.

I expect there to be huge commercial pressure to suck up to users and tell them they're brilliant. And I expect the long-term results will be as bad as the way social media optimizes for filter bubbles and rage bait.


Maybe the Fermi paradox comes about not through nuclear self-annihilation or grey goo, but through making dumb AI chatbots that are too nice to us and remove any sense of existential tension.

Maybe the universe is full of emotionally fulfilled, self-actualized narcissists too lazy to figure out how to build an FTL communications array.


This sounds like you're describing the back story of WALL-E


Life is good. Animal brain happy


I think the desire to colonise space at some point in the next 1,000 years has always been a yes, even from people who said no to doing it within their lifetimes. It's a fairly universal desire we have as a species. Curiosity and the desire to explore new frontiers is pretty baked in as a survival strategy for the species.


This is a problem with these being marketed products. Being popular isn't the same as being good, and being consumer products means they're getting optimized for what will make them popular instead of what will make them good.


Yup, I mentioned this in another thread. I quickly found it unbearable, and it makes me not trust Claude. Really damaging.


Gemma 3 does similar things.

"That's a very interesting question!"

That's kinda why I'm asking Gemma...


When I use Claude 4 in Cursor it often starts its responses with "You're absolutely right!" lol


The default "voice" (for lack of a better word) compared to 3.7 is infuriating. It reads like the biggest ass licker on the planet, and it also does crap like the below

> So, `implements` actually provides compile-time safety

What writing style even is this? Like it's trying to explain something to a 10 year old.

I suspect that the flattery is there because people react well to it and it keeps them more engaged. Plus, if it tells you your idea for a dog shit flavoured ice cream stall is the most genius idea on earth, people will use it more and send more messages back and forth.


Man I miss Claude 2. It talked like a competent, but incredibly lazy person who didn't care for formality and wanted to get the interaction over with in the shortest possible time.


That's exactly what I want from an LLM. But then again I want a tool and not a robot prostitute


Gemini is closer to that, imo, especially when calling the API. It pushes back more and doesn't do as much of the "That's brilliant!" dance.


GPT 4.1 (via CoPilot) is like this. No extra verbiage.


That is noise (and a waste), for sure.


I wonder whether this just boosts engagement metrics. The beginning of enshittification.


Like when all the LLMs start copying tone and asking followups at the end to move the conversation along


I feel that 3.7 is still the best. With 4, it keeps writing hundreds upon hundreds of lines, it'll invoke search for everything, it starts refactoring random lines unrelated to my question, it'll often rewrite entire portions of its own output for no reason. I think they took the "We need to shit out code" thing the AIs are good at and cranked it to 11 for whatever reason, where 3.7 had a nice balance (although it still writes WAY too many comments that are utterly useless)


> does anyone else feel that this model isn’t significantly different

According to Anthropic¹, LLMs are mostly a thing in the software engineering space, and not much elsewhere. I am not a software engineer, and so I'm pretty agnostic about the whole thing, mildly annoyed by the constant anthropomorphisation of LLMs in the marketing surrounding it³, and besides having had a short run with Llama about 2 years ago, I have mostly stayed away from it.

Though, I do scripting as a means to keep my digital life efficient and tidy, and so today I thought that I had a perfect justification for giving Claude 4 Sonnet a spin. I asked it to give me a jujutsu² equivalent for `git -ffdx`. What ensued was this: https://claude.ai/share/acde506c-4bb7-4ce9-add4-657ec9d5c391

I leave you to be the judge of this, but for me this is very bad. Objectively, for the time it took me to describe, review, correct some obvious logical flaws, restart, second-guess myself, get annoyed for being right and having my time wasted, fight unwarranted complexity, etc…, I could have written a better script myself.

So to answer your question, no, I don't think this is significant, and I don't think this generation of LLMs comes close to justifying its price tag.

¹: https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-...

²: https://jj-vcs.github.io/jj/latest/

³: "hallucination", "chain of thought", "mixture of experts", "deep thinking" would have you being laughed at in the more "scientifically apt" world I grew up with, but here we are </rant>


Just anecdotal experience, but this model seems more eager to write tests, create test scripts and call various tools than the previous one. Of course this results in more roundtrips and overall more tokens used and more money for the provider.

I had to stop the model going crazy with unnecessary tests several times, which isn't something I had to do previously. It can be fixed with a prompt, but I can't help but wonder if some providers explicitly train their models to be overly verbose.


Eagerness to tool call is an interesting observation. Certainly an MCP ecosystem would require a tool biased model.

However, after having pretty deep experience with writing book- (or novella-) length system prompts, what you mentioned doesn’t feel like a “regime change” in model behavior. I.e., it could do those things because it’s been asked to do those things.

The numbers presented in this paper were almost certainly after extensive system prompt ablations, and the fact that we’re within a tenth of a percent difference in some cases indicates less fundamental changes.


>I had to stop the model going crazy with unnecessary tests several times, which isn't something I had to do previously

When I was playing with this last night, I found that it worked better to let it write all the tests it wanted and then get it to revert the least important ones once the feature is finished. It actually seems to know pretty well which tests are worth keeping and which aren't.

(This was all claude 4 sonnet, I've barely tried opus yet)


Having used Claude 4 for a few hours (and Claude 3.7 and Gemini 2.5 Pro for much more than that), I really think it's much better in ways that aren't being well captured by benchmarks. It does a much better job of debugging issues than either 3.7 or Gemini, and so far it doesn't seem to have the 'reward hacking' behavior of 3.7.

It's a small step for model intelligence but a huge leap for model usability.


I have the same experience. I was pretty happy with Gemini 2.5 Pro and was barely using Claude 3.7. Now I am strictly using Claude 4 (Sonnet mostly). Especially with tasks that require multi-tool use, it nicely self-corrects, which I never noticed in 3.7 when I used it.

But it's different in a conversational sense as well. Might be the novelty, but I really enjoy it. I have had 2 instances where it had a very different take that kind of stuck with me.


I tried it and found that it was ridiculously better than Gemini on a hard programming problem that Gemini 2.5 Pro had been spinning its wheels on for days.


> to justify the full version increment

I feel like a company doesn’t have to justify a version increment. They should justify price increases.

If you get hyped and have expectations for a number then I’m comfortable saying that’s on you.


> They should justify price increases.

I think the justification for most AI price increases should go without saying - they were losing money at the old price, and they're probably still losing money at the new price, but it's creeping up towards the break-even point.


Customers don’t decide the acceptable price based on the company’s cost structure. If two equivalent cars were priced $30k apart, you wouldn’t say “well, it seems like a lot but they did have unusual losses last year and stupidly locked themselves in to that steel agreement”. You’d just buy the less expensive one that meets the same needs.

(Almost) all pricing is based on value. If the customer perceives the price as fair for the value received, they’ll pay. If not, not. There are only two “justifications” for a price increase: 1) it was an incredibly good deal at the lower price and remains a good deal at the higher price, and 2) substantially more value has been added, making it worth the higher price.

Cost structure and company economics may dictate price increases, but customers do not and should not care one whit about that stuff. All that matters is if the value is there at the new price.


That's not how pricing works on anything.


That’s an odd way to defend the decision. “It doesn’t make sense because nothing has to make sense”. Sure, but it would be more interesting if you had any evidence that they decided to simply do away with any logical premise for the 4 moniker.


> nothing has to make sense

It does make sense. The companies are expected to exponentially improve LLMs, and the increasing versions are catering to the enthusiast crowd who just need a number to go up to lose their mind over how all jobs are over and AGI is coming this year.

But there's less and less room to improve LLMs and there are currently no known new scaling vectors (size and reasoning have already been largely exhausted), so the improvement from version to version is decreasing. But I assure you, the people at Anthropic worked their asses off, neglecting their families and sleep and they want to show something for their efforts.

It makes sense, just not the sense that some people want.


They're probably feeling the heat from e.g. Google's Gemini, which is gaining ground fast, so the plan is to speed up the releases. I think a similar thing happened with OpenAI, where incremental upgrades were presented as something much more.


I want to also mention that the previous model was 3.7. 3.7 to 4 is not an entire increment, it’s theoretically the same as 3 -> 3.3, which is actually modest compared to the capability jump I’ve observed. I do think Anthropic wants more frequent, continuous releases, and using a numeric version number rather than a software version number is their intent. Gradual releases give society more time to react.


The numbers are branding, not metrics on anything. You can't do math to, say, determine the capability jump between GPT-4 and GPT-4o. Trying to do math to determine capability gaps between "3.7" and "4.0" doesn't actually make more sense.


I think they didn’t have anywhere to go after 3.7 but 4. They already did 3.5 and 3.7. People were getting a bit cranky 4 was nowhere to be seen.

I’m fine with a v4 that is marginally better since the price is still the same. 3.7 was already pretty good, so as long as they don’t regress it’s all a win to me.


I'd like version numbers to indicate some element of backwards compatibility. So point releases (mostly) wouldn't need prompt changes, whereas a major version upgrade might require significant prompt changes in my application. This is from a developer API use point of view - but honestly it would apply to large personality changes in Claude's chat interface too. It's confusing if it changes a lot and I'd like to know!


It works better when using tools, but the LLM itself is not powerful from the POV of reasoning. Actually, Sonnet 4 seems weaker than Sonnet 3.7 in many instances.


The API version of Opus 4 that I'm getting via gptel is aligned in a way that will win me back to Claude if it's intentional and durable. There seems to be maybe some generalized capability lift, but it's hard to tell; these things are alignment-constrained to a level below earlier frontier models, and the dynamic cost control and whatnot is a liability for people who work to deadlines. It's net negative.

The 3.7 bait-and-switch was the last straw for me and closed frontier vendors, or so I said, but I caught a candid, useful Opus 4 today on a lark, and if it's on purpose it's like a leadership-shakeup-level change. More likely they just don't have the "fuck the user" tune yet because they've only run it for themselves.

I'm not going to make plans contingent on it continuing to work well just yet, but I'm going to give it another audition.


I'm finding 4 Opus good, but 4 Sonnet a bit underwhelming: https://evanfields.net/Claude-4/


With all the incremental releases, it’s harder to see the advancement. Maybe it would be more fair to compare 4 vs 3 than 4 vs 3.7.


The big difference is the capability to think during tool calls. This is what makes OpenAI's o3 look like magic.


Yeah, I've noticed this with Qwen3, too. If I rig up a nonstandard harness that allows it to think before tool calls, even 30B A3B is capable of doing low-budget imitations of the things o3 and similar frontier models do. It can, for example, make a surprisingly decent "web research agent" with some scaffolding and specialized prompts for different tasks.

We need to start moving away from Chat Completions-style tool calls, and start supporting "thinking before tool calls", and even proper multi-step agent loops.
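
For anyone curious, a stripped-down sketch of what such a harness can look like (the <think>/<tool> tags and the JSON shape are my own convention here, not a standard API):

    # Let the model emit free-form thinking, then an optional JSON tool call;
    # run the tool, append the result, and loop until it stops calling tools.
    import json, re

    def run_agent(generate, tools, prompt, max_steps=8):
        """`generate` is any text-completion function; `tools` maps name -> callable."""
        transcript = prompt
        for _ in range(max_steps):
            out = generate(transcript)              # may contain <think>...</think> first
            transcript += out
            m = re.search(r"<tool>(.*?)</tool>", out, re.S)
            if not m:
                return out                          # no tool call -> final answer
            call = json.loads(m.group(1))
            result = tools[call["name"]](**call.get("args", {}))
            transcript += f"\n<tool_result>{json.dumps(result)}</tool_result>\n"
        return transcript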


> If I rig up a nonstandard harness than allows it to think before tool calls

What does that require? (I'm extremely, extremely new to all this.)


An easier rule is just to exclude two-letter words, or make two-letter words worth zero points (so you can get rid of your Qs and Zs if you wish).


If you exclude two-letter words, you are also excluding nearly all overlaps and parallel plays.


For anyone interested, the best book I found on this topic is Project Japan (https://a.co/d/fcrNp6p). It dives into the history of the whole Metabolism movement (in my opinion, an effort to create more modular and dynamic architecture and entire city plans that could be deployed, migrated, and repurposed effectively).

Members of this movement created everything from Tokyo’s iconic phone booth, to the ubiquitous soy sauce container, to ski cabins and a plan to dredge the whole of Tokyo Bay to construct a completely designed cityscape, with some truly wild proposals.

