In my experience, Claude will criticize others more than it will criticize itself. Seems similar to how LLMs in general tend to say yes to things or call anything a good idea by default.
I find it to be an entertaining reflection of the cultural nuances embedded into training data and reinforcement learning processes.
Interesting. In my experience, it's the opposite. Claude is too sycophantic. If you tell it that it was wrong, it will just accept your word at face value. If I give a problem to both Claude and Gemini, their responses differ, and I ask Claude why Gemini has a different response, Claude will just roll over and tell me that Gemini's response was perfect and that it messed up.
This is why I was really taken by Gemini 2.0/2.5 when it first came out: it was the first model that really pushed back at you. It would even tell me, unprompted, that it wanted x additional information to continue. Sadly, as Google has neutered 2.5 over the last few months, its independent streak has also gone away, and it's only slightly more individualistic than Claude's or OpenAI's models.
I would guess the training data (conversational, as opposed to coding-specific solutions) is weighted towards people finding errors in others' work more than people discussing errors in their own. If you knew there was an error in your thinking, you probably wouldn't think that way.
It gives you the benefit of the doubt if you're coding.
It also gives you the benefit of the doubt if you're looking for feedback on your developer's work. If you give it a hint of distrust ("my developer says they completed this, can you check and make sure, give them feedback....?"), Claude will look out for you.
I’m the same way — I skip straight to pricing too.
Curious though: when you get there, do you prefer seeing a few fixed tiers (like the classic “3 bucket” layout), or would you rather have a usage-based formula where you can adjust a slider or input your exact needs and see the price change in real time?
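To make the question concrete, here is a toy sketch of the difference (every name and number below is made up, not anyone's real pricing): fixed tiers map your usage to the smallest bucket that fits, while a usage-based formula recomputes the total from whatever you type or slide.

    # Toy comparison of the two pricing models; all numbers are hypothetical.
    TIERS = [("Starter", 1_000, 29), ("Pro", 10_000, 99), ("Enterprise", 100_000, 499)]

    def tiered_price(monthly_requests: int) -> tuple[str, int]:
        # Classic "3 bucket" layout: pick the smallest tier that covers your usage.
        for name, limit, price in TIERS:
            if monthly_requests <= limit:
                return name, price
        raise ValueError("contact sales")

    def usage_price(monthly_requests: int, per_request: float = 0.004, base_fee: int = 10) -> float:
        # Usage-based: a slider or input feeds this formula and the price updates live.
        return base_fee + monthly_requests * per_request

    print(tiered_price(7_500))  # ('Pro', 99)
    print(usage_price(7_500))   # 40.0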
I have been working on a data validation tool for a while. I even tried creating an extended YAML parser for data validation. You made me realize I wasted my time with that approach. Better now than later. I would love to talk to you before I throw away more code. Can we connect?
It seems things have come full circle. From what I remember, Guido wanted Python 3 to be a whole new spec and not be backward compatible with Py2. Then they kept adding features back to ease compatibility with Py2.
It took over a quarter of a year, with one engineer 100% dedicated to it and one to three other engineers involved part-time at different stages of the project.
That is a great question. Kubernetes, Docker, and the recent addition of a service mesh layer definitely make things easier for microservices. However, as another commenter mentioned, microservices are typically an organizational rather than a technical choice. We do use the pub/sub pattern widely for distributed processing. Regarding the HTTP boundaries point that you brought up, we use gRPC instead of HTTP for most services.
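For anyone unfamiliar with the pattern, here is a minimal in-process sketch of publish/subscribe in Python. It is purely illustrative (the topic name and handlers are hypothetical); a real distributed setup like the one described above would sit on a message broker (Kafka, Google Pub/Sub, NATS, etc.) rather than a dict of callbacks.

    from collections import defaultdict
    from typing import Any, Callable

    class PubSub:
        # Publishers emit events to a topic without knowing which services consume them.
        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, message: Any) -> None:
            for handler in self._subscribers[topic]:
                handler(message)

    # Two independent "services" react to the same event without coupling to the publisher.
    bus = PubSub()
    bus.subscribe("order.created", lambda msg: print("billing saw:", msg))
    bus.subscribe("order.created", lambda msg: print("shipping saw:", msg))
    bus.publish("order.created", {"order_id": 42})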
The performance gains of the API were not a by-product of cutting it into micro-services. Basically, this article is about "We had some issues with the monolith, we cut it into micro-services and also made these other changes along the way that saved us money and gave us a performance boost. Changes that we could have made in the monolith, but that were less risky and easier to do in the micro-services than in the monolith."
It's titled "Faster, cheaper, better: ... microservices", implying they were all due to splitting into microservices. But as usual it turns out refactoring was the saviour and microservices achieved jack-squat, apart from having something nice to put on his CV.
Lol. Micro-service architecture is a vehicle that can make it easier to achieve those goals. It is not a black-or-white solution. There are pros and cons.